
A woman walks in front of the Kremlin on September 23, 2024. U.S. intelligence officials say Russia has embraced artificial intelligence tools to try to sway American voters ahead of the November election.
Alexander Nemenov/AFP via Getty Images
Russia is the most prolific foreign influence actor using artificial intelligence to generate content targeting the 2024 presidential election, U.S. intelligence officials said on Monday.
The cutting-edge technology is making it easier for Russia, as well as Iran, to quickly and more convincingly tailor often-polarizing content aimed at swaying American voters, an official from the Office of the Director of National Intelligence, who spoke on condition of anonymity, told reporters at a briefing.
“The [intelligence community] considers AI a malign influence accelerant, not yet a revolutionary influence tool,” the official said. “In other words, information operations are the threat, and AI is an enabler.”
Intelligence officials have previously said they saw AI used in elections overseas. “Our update today makes clear that this is now happening here,” the ODNI official said.
Russian influence operations have spread synthetic images, video, audio, and text online, officials said. That includes AI-generated content “of and about prominent U.S. figures” and material seeking to emphasize divisive issues such as immigration. Officials said that is consistent with the Kremlin's broader goal of boosting former President Donald Trump and denigrating Vice President Kamala Harris.
But Russia is also using lower-tech methods. The ODNI official said Russian influence actors staged a video in which a woman claimed to be the victim of a hit-and-run by Harris in 2011. There is no evidence that ever happened. Last week, Microsoft also said Russia was behind the video, which was spread by a website posing as a local San Francisco TV station that does not exist.
Russia is also behind manipulated videos of Harris's speeches, the ODNI official said. They may have been altered using editing tools or with AI, and they were disseminated on social media and through other channels.
“One of the efforts we see Russian influence actors do is, when they create this media, try to encourage its spread,” the ODNI official said.
The official said the videos of Harris had been altered in a number of ways, to “paint her in a bad light both personally but also in comparison to her opponent” and to focus on issues Russia believes are divisive.
Iran has also tapped AI to generate social media posts and write fake stories for websites posing as legitimate news outlets, officials said. The intelligence community has said Iran is seeking to undercut Trump in the 2024 election.
Iran has used AI to create such content in both English and Spanish, and is targeting Americans “across the political spectrum on polarizing issues” including the conflict in Gaza and the presidential candidates, officials said.
China, the third major foreign threat to U.S. elections, is using AI in broader influence operations that aim to shape global views of China and amplify divisive topics in the U.S. such as drug use, immigration, and abortion, officials said.
However, officials said they had not identified any AI-powered operations targeting the outcome of voting in the U.S. The intelligence community has said Beijing's influence operations are focused more on down-ballot races in the U.S. than on the presidential contest.
U.S. officials, lawmakers, tech companies, and researchers have been concerned about the potential for AI-powered manipulation to upend this year's election campaign, such as deepfake videos or audio depicting candidates doing or saying something they didn't, or misleading voters about the voting process.
While those threats may still materialize as Election Day draws closer, so far AI has been used more frequently in other ways: by foreign adversaries to improve productivity and boost volume, and by political partisans to generate memes and jokes.
On Monday, the ODNI official said foreign actors have been slow to overcome three main obstacles to AI-generated content becoming a greater risk to American elections: first, overcoming the guardrails built into many AI tools without being detected; second, developing their own sophisticated models; and third, strategically targeting and distributing AI content.
As Election Day nears, the intelligence community will be monitoring for foreign efforts to introduce deceptive or AI-generated content in a variety of ways, including “laundering material through prominent figures,” using fake social media accounts or websites posing as news outlets, and “releasing supposed ‘leaks’ of AI-generated content that appear sensitive or controversial,” the ODNI report said.
Earlier this month, the Justice Department accused Russian state broadcaster RT, which the U.S. government says operates as an arm of Russian intelligence services, of funneling nearly $10 million to pro-Trump American influencers who posted videos critical of Harris and Ukraine. The influencers say they did not know the money came from Russia.