A compilation of AI-generated images of Donald Trump and Kamala Harris kissing, Harris addressing a Communist gathering, and Taylor Swift dressed as Uncle Sam.
AI-generated images of Donald Trump, Kamala Harris and celebrities, including Taylor Swift, have been making the rounds in election-related social media posts. Illustration by Zachary Shelton.

If you see something online that gives you any kind of emotional rise, it’s wise to be skeptical, a Virginia Tech librarian said in a recent interview with Cardinal News.

There’s plenty of that going around as the United States heads toward the first Tuesday in November. The two days before Election Day — and the two days after — could be harrowing, U.S. Sen. Mark Warner, D-Va., says.

Russia, China and Iran have already been working to influence voters, said Warner, who chairs the Senate’s Select Committee on Intelligence. Warner gaveled a public hearing with tech executives in Washington on Sept. 18 and talked about the hearing with Virginia reporters the following day.

Artificial intelligence now compounds the social media-based threats of the past two presidential elections, the senator wrote Friday to the director of the federal Cybersecurity and Infrastructure Security Agency. Warner urged CISA Director Jen Easterly to increase assistance to state and local governments working to counter misinformation and disinformation that could affect the 2024 election and beyond.

He noted that recent AI-generated robocalls placed during New Hampshire’s primary impersonated President Biden, telling voters to stay home and wait to vote in the general election.

“Although AI alone has not changed the threat landscape observed in previous elections, it has supercharged the threats and adjusted the risk calculus,” Warner wrote to Easterly. “CISA should likewise adjust with this change in risk to ensure that election offices and the public have the necessary protections in place to remain resilient against AI-enhanced threats.”

Julia Feerrar, who leads Digital Literacy Initiatives for the University Libraries, has developed a digital toolkit to help people understand more about AI’s influence and how to think critically about what it can present. For example, she said, an AI overview featured on Google’s search engine recently claimed that Barack Obama was the first Muslim U.S. president. Another AI claim: Eating rocks is beneficial to your health.

“I think there are multiple reasons that’s happening,” Feerrar told Cardinal News. “Biases and incorrect information in the data the tool was trained on can definitely be a reason that those hallucinations are happening.”

Feerrar said her expertise isn’t in why this happens, but in what to do about it.

“Any time that you are looking at something online that gives you pause or even just sparks a big emotional reaction, whether that’s surprise or intrigue or excitement or anger, that’s a great indicator to take a closer look at something,” she said.

Ask important general questions: Who created this? Where did this come from? Can I even tell the source? 

“And if you’re not so sure or you just see a general title, but you’re not sure what that means, you can open up a new tab in your browser, open up your search engine of choice and see: What is this thing that I’m looking at? See if other sources are reporting on the same thing that you’re seeing. Try to find that from multiple sources.”

In general, it is far too early to trust information from AI, even when it looks or sounds persuasive, she said.

“AI really magnifies how much that can be dangerous and not always work, because things can look great, but we need that other outside context to help us evaluate what we’re looking at,” she said.

Feerrar added: “If we’re talking about breaking news or election information or high-stakes decisions, that’s not when we wanna go to an AI chatbot. That’s where we want to be seeking out trustworthy information from fact-checked news sources or local government websites, for elections specifically. 

“I think there are some things that an AI tool can be helpful with, as you’re brainstorming something or thinking about the process of something, but it’s not where I want to stop for information that I’m using, especially to make a decision.”

Recent wrinkles: foreign agents, news impersonators and influenced influencers  

Warner, in the Intelligence Committee hearing, mentioned multiple examples of something that looked realistic but was generated to confuse. 

A Russian campaign used AI tools to create web pages that accurately impersonated such media sources as The Washington Post and Fox News, “with the goal of spreading what sounds like credible-sounding narratives to really shape American voters’ perceptions of candidates and campaigns,” Warner said.

The other example, which has received wider media attention, was the covert project that state-owned Russia Today bankrolled to pay “unwitting U.S. political influencers” posting on YouTube, he said.

“Our adversaries realize this is effective and cheap,” he said in the hearing.

Warner, with photos of the fake Post and Fox pages projected behind him, expressed surprise that those outlets were neither concerned with nor reporting on the ruse. 

“The fact that that’s not making people’s heads explode, the fact that, frankly, Fox News and Washington Post themselves don’t seem that concerned, concerns me greatly,” he told reporters the next day.

He and other senators, including the committee’s vice chairman, Sen. Marco Rubio, R-Fla., gave the tech companies in attendance some credit for helping unearth such schemes while adding that they had more to do — as does Congress itself, Warner added. Alabama, Texas, Michigan, Florida and California, however, have addressed deep-fake manipulations with recently passed laws, he said.

“And unfortunately Congress has not been able to take on this issue,” Warner said. “I wish we could take some of the best ideas from those states and bring them to the national level.”

He decried an increasing reliance on social media as a news source for people who don’t trust governments or media.

“This is really our effort to urge you guys to do more, to kind of alert the public that this problem is not going away,” he said. “Lord knows we have enough differences between Americans. Those differences don’t need to be exacerbated by our foreign adversaries.”

On the plus side, the recent British and French elections didn’t experience massive AI interference, Warner added. 

Not that all is quiet. Federal prosecutors late last month announced charges against alleged hackers working against former President Donald Trump on Iran’s behalf. Russian efforts on social media appeared to be aimed in Trump’s favor against Vice President Kamala Harris, prosecutors said in early September. 

All that happened before October, which Virginia’s senior senator said will likely feature more attempts at interference. 

Days immediately before and after Election Day will be critical

Kent Walker, Google owner Alphabet’s president for global affairs and its chief legal officer, sat in front of the Intelligence Committee. Facebook owner Meta’s Nick Clegg, president of global affairs, and Microsoft Vice Chair and President Brad Smith joined him.

One tech giant was conspicuous in its absence that day: X, formerly known as Twitter, Warner noted in both forums last week. He said he was concerned that X, “the worst offender, the platform that has done the most to reduce any kind of content review to make sure that users even follow their own rules of engagement,” did not reply to the committee’s invitation to appear.

Multiple senators also mentioned TikTok. That platform, which a new U.S. law requires to be sold to an owner free of China’s influence, was also unrepresented at the hearing.

“What I’ve called for from all of these tech firms is, what is their surge capability going to be in those 48-to-72 hours before the election, where people will be probably smearing all kinds of misinformation out there,” he told reporters on Sept. 20.

Smith told the panel that Microsoft was preparing for the 48 hours before the election. He cited Russian interference in recent European elections. Le Monde and other news sources reported in March and April that Czech and Polish intelligence discovered a Russian network operating the Voice of Europe website and using it to work for Russia and against Ukraine. Authorities shut down the site before June’s elections.

Preserving the right to free expression is Microsoft’s “north star,” Smith said, but the company is working to “prevent adversaries from exploiting American products and platforms.”

Clegg told the panel that Meta, too, values free expression, but also continues to adapt to stay ahead of “emerging challenges” and has removed more than 200 networks since 2017.

“People trying to interfere in elections rarely target a single platform,” Clegg said.

Facebook, however, was at the center of the Cambridge Analytica scandal during the 2016 election. More recently, it missed the spoofed news sites and Russian-source advertising, Warner said. He asked Clegg to have Meta send the committee information by about Sept. 25 on how many users viewed the ads and links to the sites.

The company had not sent that information by the end of last week, a Warner spokeswoman said.

“I think they are getting through in many more ways than had been represented here,” Warner said.

None of the tech representatives had specific answers about dealing with post-election-day chicanery. Warner discussed possible scenarios with reporters the next day.

One possibility could involve someone posing as a local election official, someone whose face isn’t known in the same way that a national candidate’s face might be. The person could pop up in a video on social media, purportedly destroying ballots. Or a spoofed website might blast a headline reporting false information about election results.

“In 2016 and 2020, we were mostly focused on what happens up to the election,” he said. “The vast majority of us, you never expected the aftermath in 2020, we never expected January 6. We never expected an unwillingness or an attempt to undermine the legitimate vote of Americans about which votes would be certified.”

AI’s new twist on cat videos 

To date, AI’s chief election role has been as a meme generator. The Associated Press reported such images as a video of Trump riding a cat while wielding an assault rifle, a mustache-wearing Harris dressed in “communist attire” and the two candidates in “a passionate embrace.”

There are plenty of other tools available to sow real discord, and they are part of a long line of misinformation and disinformation methods, said Feerrar, the Virginia Tech librarian. Critical thinking remains key to separating honesty from subterfuge.

“It’s been around for a very long time, and it’s something I’ve been talking about with folks throughout my career, but of course, it long, long, long, predates me as well,” she said. “But I do think that AI raises the stakes in some ways, in terms of making it easier to create and share questionable content.

“And luckily, we can use many of the same strategies and ask some of the same questions of AI that we would any kind of information.”

Tad Dickens is technology reporter for Cardinal News. He previously worked for the Bristol Herald Courier...
