A photo of Donald Trump under arrest, a video showing a dark future in the event of President Joe Biden’s re-election, an audio recording of an argument between the two men. These social media posts have one thing in common: they are completely fake.
All were created using artificial intelligence (AI), a rapidly advancing technology. Experts fear it will unleash a deluge of false information during the 2024 presidential election, undoubtedly the first election in which its use will be widespread.
A temptation for all sides
Democrats and Republicans alike will be tempted to turn to artificial intelligence, which is cheap, accessible and subject to little legal oversight, to better woo voters or churn out leaflets at the snap of a finger. But experts fear the tool could also be used to wreak havoc in a divided country where some voters still believe the 2020 election was stolen from former President Donald Trump, despite evidence to the contrary.
In March, fake AI-generated images showing Trump being arrested by police officers went viral, offering a glimpse of what the 2024 campaign could look like. Last month, in response to Joe Biden’s candidacy announcement, the Republican Party released a video, also made with AI, predicting a nightmarish future if he is re-elected. The realistic but fake images showed China invading Taiwan and financial markets collapsing.
“New Tools to Fuel Hate”
And earlier this year, an audio recording of Donald Trump and Joe Biden hurling insults at each other made the rounds on TikTok. It too was fake, and again produced using AI. For Joe Rospars, founder of the digital agency Blue State, this technology gives people with bad intentions “new tools to fuel hatred” and “to confuse the press and the public.” Fighting them “will require vigilance from the media, the tech companies and the voters themselves,” he says. Whatever the intentions of the person using it, the effectiveness of AI is undeniable.
When AFP asked ChatGPT to create a political newsletter in favor of Donald Trump, feeding it false claims he has spread, the interface produced a polished text full of falsehoods within seconds. And when the bot was asked to make the text “more aggressive,” it repeated those false claims in an even more alarmist tone.
Distrust of the media does not help
“Right now the AI is lying a lot,” says Dan Woods, a former official on Joe Biden’s 2020 campaign. “We should prepare for a much more intense disinformation campaign than in 2016,” he says. At the same time, the technology can also help campaigns better understand voters, especially those who vote rarely or not at all, says Vance Reavie, head of Junction AI.
Artificial intelligence makes it possible to “understand exactly what interests them and why, and from there we can decide how to engage them and what policies will interest them,” he says. It can also save campaign teams time when drafting speeches, tweets or voter questionnaires. But “much of the content generated will be fake,” Reavie notes. Many Americans’ distrust of the mainstream media does not help matters.
Even easier to lie
“The fear is that, as it becomes easier to manipulate the media, it will become easier to deny reality,” said Hany Farid, a professor at the University of California, Berkeley. “For example, if a candidate says something inappropriate or illegal, they can simply say that the admission is fake. That is particularly dangerous.”
Despite the fears, the technology is already at work. Betsy Hoover of Higher Ground Labs told AFP her company is developing a project that uses AI to write fundraising emails and evaluate their effectiveness.
“Those with bad intentions will use whatever tools are available to them to achieve their goals, and artificial intelligence is no exception,” said Hoover, who worked on Barack Obama’s 2012 campaign. “But I don’t think this fear should stop us from taking advantage of AI.”