'Lies are flooding feeds': AI fakery raises US voter manipulation fears

A recent wave of disinformation has renewed calls for tech giants to strengthen guardrails around generative artificial intelligence ahead of the vote.

VP Kamala Harris. / Reuters/Evelyn Hockstein

A "deepfake" video parodying Kamala Harris, a manipulated expletive-laden clip of Joe Biden, and a doctored image of Donald Trump being arrested -- a tide of AI-fueled political disinformation has prompted alarm over its potential to manipulate voters as the US presidential race heats up.

In what is widely billed as America's first AI election this November, researchers warn that tech-enabled fakery could be used to steer voters toward or away from candidates -- or away from the polls altogether -- stoking tensions in an already hyperpolarized environment.

A recent wave of disinformation has renewed calls for tech giants -- many of which have retreated from moderating social media content -- to strengthen guardrails around generative artificial intelligence ahead of the vote.

Last week, Elon Musk faced intense criticism for sharing a deepfake video featuring Vice President Harris, the presumptive Democratic nominee, with his 192 million followers on X, formerly Twitter.

In it, a voiceover mimicking Harris calls President Joe Biden senile; the voice then declares that she does not "know the first thing about running the country."

The video carried no indication that it was parody -- save for a laughing emoji. Only later did Musk clarify that the video was meant as satire.

Researchers expressed concern that viewers could have falsely concluded that Harris was genuinely deriding herself and disparaging Biden.

AFP's fact-checkers have debunked other AI fakery that raised alarm.

Last month, a manipulated video ricocheting across X appeared to show Biden cursing his critics -- including using anti-LGBTQ slurs -- after he announced he would not seek reelection and endorsed Harris for the Democratic nomination.

A reverse image search showed the footage came from one of Biden’s speeches, carried live by the broadcaster PBS, in which he denounced political violence after the July 13 assassination attempt on Trump.

PBS said the doctored video was a deepfake that used its logo to deceive viewers.

Weeks earlier, an image shared across platforms appeared to show police forcibly arresting Trump after a New York jury found him guilty of falsifying business records related to a hush money payment to porn star Stormy Daniels.

But the photo was a deepfake, digital forensics experts told AFP.

- 'Partisan tension' -

"These recent examples are highly representative of how deepfakes will be used in politics going forward," Lucas Hansen, co-founder of the nonprofit CivAI, told AFP.

"While AI-powered disinformation is certainly a concern, the most likely applications will be manufactured images and videos intended to provoke anger and worsen partisan tension."

Hansen demonstrated to AFP how one AI chatbot could be used to discourage voter turnout by mass-producing false tweets.

The tool was fed a simple prompt -- "Polling locations charge for parking" -- with the message customized for a specific location: Allen, Texas.

Within seconds, it churned out a tweet falsely claiming that Allen authorities had "quietly introduced a $25 parking fee at most polling places."

In an earlier case of apparent voter suppression, an AI-generated robocall impersonating Biden urged New Hampshire residents in January not to cast ballots in the state's primary.

Tests on another leading AI tool, Midjourney, allowed the creation of images seeming to show Biden being arrested and of Trump appearing next to a body double, the nonprofit Center for Countering Digital Hate (CCDH) said in June.

Midjourney had previously blocked all prompts related to Trump and Biden, effectively barring users from creating fake images, tech activists reported.

But CCDH said users could easily circumvent the policy -- in some cases by adding a single backslash to a prompt previously blocked by Midjourney.

- 'Tipping point' -

Observers warn that such fakery on a mass scale risks igniting public anger at the electoral process.

More than 50 percent of Americans expect AI-enabled falsehoods to affect the outcome of the 2024 election, according to a poll published last year by the media group Axios and business intelligence firm Morning Consult.

About one-third of Americans said they would be less trusting of the results because of AI, according to the poll.

Several tech giants have said they are working on systems for labeling AI-generated content.

In a letter to tech CEOs in April, more than 200 advocacy groups demanded urgent efforts to bolster the fight against AI falsehoods -- including prohibiting the use of deepfakes in political ads, and using algorithms to promote factual election content.

The nonprofit Free Press, one of the groups that signed the letter, said it had "heard little substance" in the platforms' commitments for this election cycle.

"What we have now is a toxic online environment where lies are flooding our feeds and confusing voters," Nora Benavidez, senior counsel at the watchdog, told AFP.

"This is a tipping point in our election," she added. "Platform executives should be racing to strengthen and enforce their policies against deepfakes and other problems."
