The Impact of Artificial Intelligence on the 2024 Election

During a Congressional Internet Caucus Academy review this week, experts argued that the effect of artificial intelligence on the 2024 election was less extreme than predicted – but deepfakes and disinformation still played a role.

There were major concerns ahead of the 2024 election that AI would disrupt it through false information; overall, the impact was less extreme than experts had warned it could be. But AI still had an effect, seen through deepfakes such as the Biden robocall and misinformation from AI-powered chatbots.

“We didn’t see any widespread use of AI tools to create deep fakes that would in any way influence the election,” said Jennifer Huddleston, senior fellow in technology policy at the Cato Institute.


And while the widespread AI-powered “Apocalypse” predicted by some experts did not materialize, there was still a significant amount of misinformation. The Biden robocall was the most notable deepfake example of this election cycle. But as Tim Harper, senior policy analyst and project manager for the Center for Democracy and Technology, explained, there were several instances of AI being misused. These included fake websites created by foreign governments and deepfakes that spread false information about candidates.

In addition to that kind of misinformation, Harper emphasized that a major problem was how AI tools could be used to target people at a more micro level than previously seen, which he said happened during this election cycle. Examples include AI-generated text messages to Wisconsin students that were deemed intimidating, and non-English disinformation campaigns targeting Hispanic voters that were intended to create confusion. AI’s role in this election, Harper said, has affected public trust and the perception of truth.

One positive trend seen this year, according to Huddleston, was that the existing information ecosystem helped combat AI-driven disinformation. For example, with the Biden robocall, there was a quick response, allowing voters to be more informed and discerning about what to believe.

Huddleston said she believes it’s too early to predict exactly how this technology will develop and what AI’s public perception and adoption might look like. But she said using education as a policy tool could help improve understanding of AI risks and reduce misinformation.

Internet literacy is still evolving, Harper said; he expects to see a similarly slow increase in AI literacy and adoption: “I think public education around this type of threat is very important.”

AI REGULATION AND ELECTIONS

While bipartisan legislation was introduced to combat AI-generated deepfakes, it was not approved before the election. However, there are other regulatory protections.

Harper pointed to the Federal Communications Commission (FCC) ruling that the Telephone Consumer Protection Act (TCPA) regulates robocalls using artificially generated speech. This ruling applied to the Biden robocall, and the perpetrators were brought to justice.

Shortcomings in the legislation still remain, however, even in this case. The TCPA does not apply to nonprofit organizations, religious institutions, or calls to landlines. Harper said the FCC is being transparent about such “loopholes” and pushing to close them.

When it comes to legislation to combat AI risks, Huddleston said that in many cases there are already some safeguards in place, and she argued that the problem is not always the AI technology itself, but rather its misuse. She said those regulating this technology should be careful not to wrongly condemn technologies that could be beneficial, and should consider whether the problems are new or existing ones to which AI adds an extra layer.

Many states have implemented their own AI legislation, and Huddleston warned that this “patchwork” of laws could create barriers to developing and deploying AI technology.

Harper noted that there are valid First Amendment concerns about over-regulation of AI. He argued that more regulation is needed, but whether that can come through agency-level regulation or new legislation is yet to be seen.

To combat the lack of comprehensive federal legislation addressing AI use in elections, many private technology companies have sought to self-regulate. According to Huddleston, this is due not only to pressure from regulators but also to consumer demand.

Huddleston noted that broad definitions of AI in the regulatory world can also inadvertently limit beneficial applications of AI.

She explained that many of these are harmless applications, such as speech-to-text software and navigation platforms for finding the best route between campaign events. The use of AI for things like captioning can also build capacity for campaigns with limited resources.

AI can help identify potential cases of a campaign being hacked, Huddleston said, helping campaigns be more proactive in the event of a security threat.

“It’s not just the campaigns that can benefit from some use of this technology,” Harper said, noting that election officials can use this technology to educate voters, inform planning, conduct post-election analysis and increase efficiency.

While this discussion addressed the impact of AI on the election, questions remain about the impact of the election on AI. It is important to note that the incoming administration’s platform included rescinding the Biden administration’s executive order on AI, Huddleston said, adding that whether it will be rescinded and replaced or rescinded without a replacement remains to be seen.