AI’s Role in Sudan’s Disinformation: Impersonating Leaders


A campaign using artificial intelligence to impersonate Omar al-Bashir, the former leader of Sudan, has garnered hundreds of thousands of views on TikTok, adding to online confusion in a country devastated by civil war.

Since late August, an unidentified account has been sharing what it claims are “leaked recordings” of the ex-president. The channel has posted numerous clips, each purportedly containing sensitive material, but the voice heard in them is not genuine: it has been artificially generated or manipulated, casting doubt on the authenticity and credibility of everything the channel presents.

Bashir, who stands accused of orchestrating war crimes and was overthrown by the military in 2019, has not made any public appearances for an entire year, fueling speculation about his deteriorating health. He staunchly denies the allegations of war crimes against him.

The mystery surrounding his whereabouts adds a further element of instability to a nation already in turmoil since clashes erupted in April between the military, which currently holds power, and the rival Rapid Support Forces militia group.

Campaigns of this nature matter because they show how cheaply and quickly modern tools can spread fraudulent content across social media, underlining the evolving challenge of misinformation and manipulation in the digital age.

They also underscore the need for vigilant monitoring and countermeasures to limit the spread of fake content, safeguard public trust and preserve the integrity of online information spaces as the technology continues to advance.

“It is the democratisation of access to sophisticated audio and video manipulation technology that has me most worried,” says Hany Farid, who researches digital forensics at the University of California, Berkeley, in the US.

“Sophisticated actors have been able to distort reality for decades, but now the average person with little to no technical expertise can quickly and easily create fake content.”

The recordings are uploaded to a channel named “The Voice of Sudan”. The posts appear to be a blend of older clips from press conferences during coup attempts, snippets from news reports and several so-called “leaked recordings” attributed to Bashir. They often claim to be excerpts from meetings or phone conversations, with audio quality deliberately degraded to mimic the effects of a poor telephone connection.

To verify their authenticity, we initially sought the expertise of a group of Sudan specialists at BBC Monitoring. Ibrahim Haithar, one of these experts, indicated that these recordings were unlikely to be recent:

“The voice sounds like Bashir but he has been very ill for the past few years and doubt he would be able to speak so clearly.”

This doesn’t mean it’s not him.


We also explored other potential explanations, ruling out the possibility that this was an old clip resurfacing or the work of an impressionist mimicking Bashir’s voice.

However, the most compelling evidence emerged from a user on X (formerly Twitter). They were able to identify the very first of the Bashir recordings, which had been posted in August 2023. In this recording, Bashir purportedly criticized General Abdel Fattah Burhan, the commander of the Sudanese army.

What made this discovery particularly significant was that this Bashir recording matched a Facebook Live broadcast aired just two days prior by a popular Sudanese political commentator, known as Al Insirafi. Although Al Insirafi is believed to reside in the United States, he has never shown his face on camera.

While the voices of Bashir and Al Insirafi don’t bear a strong resemblance, the scripts used in both recordings were identical. Furthermore, when the two clips were played simultaneously, they aligned perfectly in terms of timing.

Detailed analysis of the audio waveforms also revealed strikingly similar patterns of speech and silence in the two clips, as noted by Mr. Farid, pointing to the likely origin of the recordings.
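To illustrate the kind of waveform comparison described above, here is a minimal Python sketch, not the investigators’ actual method: the file names, sample rate and frame sizes are assumptions, and it simply correlates the loudness (RMS) envelopes of two clips. Envelopes that track each other almost frame for frame are consistent with one recording having been re-voiced from the other, even when the voices themselves sound different.

```python
# Minimal sketch, not the workflow used in the investigation. Assumes two
# hypothetical local files and the numpy + librosa packages.
import numpy as np
import librosa

def rms_envelope(path, sr=16000, frame=2048, hop=512):
    """Load a clip and return its short-time loudness (RMS) envelope."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    return librosa.feature.rms(y=y, frame_length=frame, hop_length=hop)[0]

a = rms_envelope("suspect_bashir_clip.wav")   # hypothetical file name
b = rms_envelope("insirafi_broadcast.wav")    # hypothetical file name

# Trim to a common length and standardise, so differences in overall volume
# (e.g. simulated "telephone" quality) matter less than timing.
n = min(len(a), len(b))
a = (a[:n] - a[:n].mean()) / a[:n].std()
b = (b[:n] - b[:n].mean()) / b[:n].std()

# Pearson correlation of the envelopes: a value near 1 means the bursts of
# speech and the silences line up almost frame for frame across both clips.
print("envelope correlation:", float(np.corrcoef(a, b)[0, 1]))
```

A high correlation is not proof of manipulation on its own, but combined with identical scripts and timing it supports the conclusion drawn here.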

The evidence strongly suggests that voice conversion software was used to replicate Bashir’s speech. Such software lets a user upload a recording of one voice and transform it so that it sounds as though it were spoken by someone else.

Further investigation revealed that at least four more of the Bashir recordings had been lifted from live broadcasts by the same blogger, although there is no evidence that he is involved in the campaign.

The TikTok account in question focuses solely on political content and displays a detailed understanding of the situation in Sudan. A consistent theme throughout the recordings is criticism of General Burhan, the head of the army.

The motivation behind the campaign is unclear: it may aim to trick audiences into believing that Bashir has resurfaced to play a role in the conflict, or it may be an attempt to lend legitimacy to a particular political viewpoint by borrowing the former leader’s voice.

The Voice of Sudan account denies any intent to mislead the public and claims no affiliation with any specific group. When contacted, it responded with a text message stating: “I want to communicate my voice and explain the reality that my country is going through in my style.”

This extensive effort to impersonate Bashir carries significant implications for the region and has the potential to deceive audiences, as noted by Henry Ajder, whose series on BBC Radio 4 explored the evolution of synthetic media.

AI experts have long been concerned that the proliferation of fake videos and audio could give rise to widespread disinformation, potentially inciting unrest and disrupting elections. Mohamed Suliman, a researcher at Northeastern University’s Civic AI Lab, points out the alarming prospect that these manipulated recordings might even lead people to distrust genuine audio recordings, further complicating the landscape of truth and deception in the digital age.

How to spot audio-based disinformation

Identifying audio-based disinformation can be challenging, but there are several steps you can take to help spot it:

Check the Source:

Start by verifying the source of the audio. Is it from a reputable news organization or a known credible source? Be skeptical of content from unverified or anonymous sources.

Listen Carefully:

Pay close attention to the content of the audio. Does it sound credible and coherent, or does it contain inconsistencies, unusual pauses, or abrupt edits that suggest manipulation?

Verify the Date and Context:

Determine when and where the audio was recorded. Is it current, or does it relate to a different event or time period? Context is crucial for understanding the meaning and relevance of the audio.

Compare with Other Sources:

If possible, cross-reference the information in the audio with other reliable sources. Do other sources corroborate or contradict the claims made in the audio?

Analyze the Speaker’s Voice:

Consider whether the voice in the audio matches what you would expect from the purported speaker. Listen for distinctive characteristics, accents, or speech patterns that may raise suspicions.

Check for Audio Manipulation:

Be alert for signs of audio manipulation, such as unnatural changes in pitch, tone or background noise. Advanced editing techniques can make alterations difficult to spot, but careful listening may reveal inconsistencies (a short illustrative sketch follows this list).

Look for Red Flags:

Be wary of sensational or extreme claims in the audio. Such claims should prompt further scrutiny and fact-checking.

Consult Fact-Checking Organizations:

Utilize fact-checking websites and organizations that specialize in verifying information. They often investigate and debunk false or misleading audio content.

Consider the Motivation:

Think about why someone might want to create or spread disinformation through audio. Consider the political, social, or economic motivations behind the content.

Educate Yourself:

Familiarize yourself with emerging technologies, such as deepfakes and voice synthesis, that can manipulate audio. Understanding these techniques can help you recognize potential manipulations.

Report Suspicious Content:

If you come across audio content that appears to be disinformation, report it to the platform where you found it and alert relevant authorities or fact-checking organizations.

Promote Media Literacy:

Encourage media literacy among your peers and in your community. Educating others about how to spot disinformation can be a powerful defense against its spread.
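For readers comfortable with a little code, one way to support the “Check for Audio Manipulation” step above is to inspect a spectrogram of a clip for hard vertical seams, sudden shifts in the background noise floor or missing room tone where segments may have been spliced. The sketch below is only illustrative: the file name and analysis parameters are assumptions, and a clean-looking spectrogram does not prove a recording is genuine.

```python
# Illustrative sketch only: plot a spectrogram and inspect it by eye for
# abrupt edits. Assumes a hypothetical WAV file and numpy/scipy/matplotlib.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

sr, samples = wavfile.read("suspect_clip.wav")  # hypothetical file name
if samples.ndim > 1:                            # mix stereo down to mono
    samples = samples.mean(axis=1)

freqs, times, power = spectrogram(samples, fs=sr, nperseg=1024, noverlap=768)

# Plot on a decibel scale; splices often show up as sharp vertical
# discontinuities or sudden changes in the noise floor between sections.
plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-10), shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram: look for hard cuts or inconsistent background noise")
plt.show()
```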

As technology continues to blur the line between fact and fiction in the digital information landscape, the need for media literacy, critical evaluation of content and proactive measures against evolving disinformation threats becomes increasingly apparent.
