AI: Voice Cloning Tech Emerges In Sudan Civil War
A campaign using artificial intelligence to impersonate former Sudanese leader Omar al-Bashir has been viewed tens of thousands of times on TikTok, sowing further confusion online in a country ravaged by civil war.
An anonymous account has been publishing what it claims are “leaked recordings” of the former president since late August. The channel has posted dozens of clips, but the voice in them is fake.
Bashir, accused of organizing war crimes and ousted by the military in 2019, has not been seen in public for a year and is said to be seriously ill. He denies war crimes charges.
The mystery surrounding his fate after fighting that broke out in April between the current ruling army and a rival militia, the Rapid Support Forces, has added even more uncertainty to a country already in crisis.
Such campaigns are important because they show how new tools can spread fake content on social media quickly and cheaply, experts say.
“It’s the democratization of access to advanced audio and video manipulation technologies that worries me the most,” said Hany Farid, a digital forensics researcher at the University of California, Berkeley.
“Experienced actors have been able to distort reality for decades, but now an average person without technical skills can create fake content quickly and easily.”
The recordings are posted by a channel called Voice of Sudan. The posts appear to be a mix of old clips from press conferences during coup attempts, news reports and "leaked recordings" attributed to Bashir. They often claim to capture meetings or phone conversations, and the audio is grainy, as you would expect from a poor phone line.
To verify their authenticity, we first consulted Sudan experts at BBC Monitoring. Ibrahim Haither told us the recordings are unlikely to be recent:
"His voice sounds like Bashir's, but he has been seriously ill for several years and I doubt he can speak clearly."
That alone, however, does not prove the voice is not his.
We considered other possible explanations, but the clips are not resurfaced old recordings, and they do not appear to be the work of an impersonator.
The most compelling evidence came from a user on X, formerly known as Twitter.
They recognized the first Bashir recording, posted in August 2023, in which the speaker criticizes the commander of the Sudanese army, General Abdel Fattah Burhan.
The Bashir recording matched a Facebook Live broadcast aired two days earlier by Al Insirafi, a popular Sudanese political commentator. He is believed to live in the United States, but his face has never been seen on camera.
The two voices do not sound alike, but the script is identical, and if you play both clips at the same time they are perfectly in sync.
Comparing the audio waveforms shows similar patterns of speech and silence, Farid said.
The evidence suggests voice conversion software was used to mimic Bashir's speech. Such software is a powerful tool that lets a user upload an audio track and convert it into a different voice.
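For readers curious what this kind of waveform comparison might look like in practice, here is a minimal sketch. It is not the BBC's or Farid's actual tooling; it assumes two local audio files with hypothetical names (bashir_clip.wav and insirafi_clip.wav) and uses the open-source librosa library to compare loudness envelopes, one simple way to check whether two clips share the same pattern of speech and silence.

```python
# Minimal sketch: compare the loudness envelopes of two audio clips to see
# whether they pause and speak at the same moments.
# File names are hypothetical placeholders, not the originals from this story.
import numpy as np
import librosa

def loudness_envelope(path, sr=16000, hop_length=512):
    """Load a clip and return its frame-by-frame RMS loudness, scaled to 0-1."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    rms = librosa.feature.rms(y=y, hop_length=hop_length)[0]
    return (rms - rms.min()) / (rms.max() - rms.min() + 1e-9)

env_a = loudness_envelope("bashir_clip.wav")
env_b = loudness_envelope("insirafi_clip.wav")

# Trim to a common length, then measure how closely the two envelopes track each other.
n = min(len(env_a), len(env_b))
correlation = np.corrcoef(env_a[:n], env_b[:n])[0, 1]

print(f"Envelope correlation: {correlation:.2f}")
# A value close to 1.0 means the clips share patterns of speech and silence,
# consistent with one being a voice-converted copy of the other.
```

A near-perfect match of timing between two clips spoken by apparently different voices is exactly the kind of signal that points to voice conversion rather than coincidence.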
Digging deeper, a pattern emerged: we found at least four more Bashir recordings taken from the same blogger's live broadcasts. There is no evidence that the blogger himself is involved.
The TikTok account is overtly political, and its posts show a detailed knowledge of what is happening in Sudan, but it is unclear who is behind the campaign. A consistent thread is criticism of the army commander, General Burhan.
The motive may be to convince the public that Bashir has emerged to play a role in the war. The channel may also be trying to legitimize a particular political viewpoint by using the voice of the former leader. What the campaign ultimately hopes to achieve is not yet clear.
Voice of Sudan denies misleading the public and maintains that it is not affiliated with any group. We contacted the account and received a reply by message saying: “I want to express my voice and explain in my own way the reality that my country is experiencing.”
Attempts to imitate Bashir at this scale could be seen as significant for the region and risk deceiving the public, said Henry Ajder, whose BBC Radio 4 series explores the rise of synthetic media.
Artificial intelligence experts have long feared that fake video and audio could sow waves of confusion, potentially inciting unrest and disrupting elections.
“What’s concerning is that this content may also create an environment where many people no longer trust real information,” said Mohammad Suliman, a researcher at Northeastern University’s Civic AI Lab.
How can you spot fake audio?
As this case shows, people should ask themselves a few questions before sharing a recording.
It is important to check whether the audio comes from a reliable source. However, audio is difficult to verify, especially when content is shared through messaging apps. The challenge is even greater at times of social unrest, as is currently the case in Sudan.
Algorithms trained to recognize synthetic audio are still at an early stage of development, while the technology for simulating voices is already well advanced.
After the BBC contacted TikTok, the account was suspended.