Tech giants vow to fight AI-generated election ‘deepfakes’

Major technology firms have vowed to combat election disinformation campaigns, pledging to use advanced AI technologies to detect and curb the spread of fabricated videos commonly known as “deepfakes”.

At the Munich Security Conference (MSC), the companies signed the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a set of commitments to deploy technology that counters harmful AI-generated content meant to deceive voters, they said in a statement.

The tech giants are committing to combating the spread of AI-generated deepfake videos, especially those related to elections. Deepfakes use artificial intelligence to manipulate or fabricate realistic-looking videos that can spread misinformation and sway public opinion. Companies including Google, Meta and TikTok are pledging to develop tools and strategies to identify and counter such deceptive videos and to protect the integrity of democratic processes. They announced the agreement as political and security leaders gathered at the Munich Security Conference in Germany.

Deepfakes can spread false information, misleading the public and undermining trust in credible sources. They can manipulate public opinion by presenting fabricated events or statements as real, influencing elections and public discourse. They can also be used to defame individuals, incite violence, or stir political unrest through false narratives, and voters may be misled about where, when, and how to cast their ballots.

The accord signatories are Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic, and X.

The signatories assured the public that they will be transparent about how they fight AI-generated falsehoods that exploit their platforms to sabotage elections. Countering them, however, will require “a whole-of-society response,” the companies said. Technical tools such as “metadata, watermarking, classifiers, or other forms of provenance or detection techniques” cannot fully eliminate the risks of AI, so the initiative will need support from governments and other organizations to raise public awareness of deepfakes. Today, almost anyone can digitally produce or alter images and videos convincingly enough to trick and mislead voters.
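
To give a sense of what the simpler end of these provenance checks can look like, below is a minimal, hypothetical sketch in Python that inspects an image’s embedded metadata for signs of AI generation. It assumes the Pillow library is installed; the keyword list, the example file name, and the idea of treating stripped metadata as a weak signal are illustrative assumptions, not part of the accord or any company’s actual detection pipeline, which rely on far stronger signals such as cryptographically signed provenance, watermarking, and trained classifiers.

```python
# Minimal illustration of a metadata-based provenance check.
# Assumption: Pillow is installed (`pip install Pillow`).
from PIL import Image
from PIL.ExifTags import TAGS

# Hypothetical keyword list; real systems rely on signed provenance data,
# not simple string matching.
AI_HINTS = ("stable diffusion", "midjourney", "dall-e", "generative")

def inspect_image(path: str) -> str:
    """Return a rough verdict based only on EXIF metadata."""
    img = Image.open(path)
    exif = img.getexif()
    if not exif:
        # Missing metadata is not proof of tampering, only a weak signal.
        return "no EXIF metadata found (inconclusive)"
    for tag_id, value in exif.items():
        tag = TAGS.get(tag_id, str(tag_id))
        text = str(value).lower()
        if any(hint in text for hint in AI_HINTS):
            return f"possible AI-generation marker in {tag}: {value!r}"
    return "no obvious AI-generation markers in EXIF metadata"

if __name__ == "__main__":
    print(inspect_image("example.jpg"))  # hypothetical file name
```

As the companies themselves note, metadata like this is easy to strip or forge, which is why such checks are only one piece of the broader response the accord describes.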

The accord has nonetheless been criticized by some in the technology industry who argue it could divert attention from the need to regulate and oversee tech businesses.

“Google has been supporting election integrity for years, and the accord reflects an industry-side commitment against AI-generated election misinformation that erodes trust,” said Kent Walker, Google’s President of Global Affairs.

“Elections are the beating heart of democracies. The Tech Accord to Combat Deceptive Use of AI in 2024 elections is a crucial step in advancing election integrity, increasing societal resilience, and creating trustworthy tech practices,” said Ambassador Dr Christoph Heusgen, Chairman of the Munich Security Conference.

As society embraces the benefits of AI, “we have a responsibility to help ensure these tools don’t become weaponized in elections,” said Brad Smith, Vice Chair and President of Microsoft.

Meanwhile, deepfakes keep multiplying. According to data from Clarity, a deepfake detection firm, the number of deepfakes created has increased 900% year over year.

Last month, AI robocalls mimicking U.S. President Joe Biden’s voice tried to discourage people from voting in New Hampshire’s primary election. And in November, just days before Slovakia’s elections, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and rig the election.
