Applied Trust and Safety Initiative

2024 Elections and AI Case Studies: Beware the Six-Fingered Man

“I do not mean to pry, but you don’t by any chance happen to have six fingers on your right hand?” – Inigo Montoya

Just as the character Inigo Montoya in The Princess Bride searched for the sixth finger as a telltale sign of the villain he sought to vanquish, deepfake images used to be easily distinguishable by anomalous characteristics such as superfluous fingers (often of “unusual size”). However, just in time for this year’s elections around the globe, spotting a deepfake is no longer straightforward, thanks to the rapid progress of AI.

Next Wednesday, July 24, at TrustCon, our cross-sector panel will tackle the pressing topic of AI’s impact on the 2024 elections thus far. As we navigate through this pivotal Year of Elections — with more than half of the world’s population expected to vote by the end of 2024 — the role of AI technologies, particularly generative AI (GenAI), has become a focal point. These technologies are rapidly reshaping the online ecosystems that voters depend on for essential information — ranging from voter registration to polling updates and election outcomes. The potential threats these new technologies pose to information integrity and democratic processes have sparked widespread concern. How have these threats materialized so far? We’ll be examining this and more during the session.

We are a diverse group of researchers, policy advisors, and industry practitioners with extensive experience in AI and election integrity at leading tech companies, academic institutions, and in civil society. We have direct experience responding to evolving technological threats to global elections over the past decade. Recognizing the emergence of generative AI as a potential tool for bad actors, we have been monitoring the impact of AI on elections this year in countries across the democratic spectrum. In this preview of our panel discussion, we share emerging patterns in the questionable use of AI in elections globally. We also present insights from the latest research on the impact of AI-generated content on users of social media and messaging apps, highlighting key trends to watch in the remaining elections of 2024 that might help us understand AI’s broader societal impacts. 

Our collective experience in election-related information integrity led us to hypothesize that generative AI would be used to produce content on recurring election misinformation themes. We observed informational trends during elections including those in Taiwan, Indonesia, Pakistan, Türkiye, India, and South Africa. In key cases from these countries, we documented numerous instances of AI-generated content used for campaigning and political purposes, including targeting politicians and political parties and highlighting geopolitical or social issues. So far, we have not seen extensive use of AI-generated content to cast doubt on election outcomes or procedures.

According to our case studies, generative AI has been used as a tool to facilitate two main political aims across elections: 

  1. Propaganda and disinformation. Generative AI has been incorporated into the toolbox for generating propaganda and disinformation by state and non-state actors with a history of election interference for political and economic gain. Perhaps the most significant contribution of GenAI tools is their ability to generate and disseminate fake content with unparalleled speed, volume, and distinctiveness. These tools can rapidly create synthetic videos and audio, which are then distributed across social media platforms, messaging apps, and traditional media. The high velocity and scale of content creation enable disinformation profiteers to flood information channels with misleading content, making it difficult for the truth to prevail. Although these techniques are relatively new, they expose both the strengths and the vulnerabilities of a country’s resilience and response to disinformation.
  2. Charismatic digital versions of candidates. Candidates have also used generative AI to construct more appealing digital versions of themselves, intended to leave a positive impression on voters and, ultimately, influence their choices at the ballot box. While this content doesn’t necessarily aim to overtly deceive voters, it often helps controversial political figures portray themselves more favorably. These cases raise critical questions about the manipulation of perceptions and about what society will expect and tolerate regarding authentic representation from politicians.

Exploring Vulnerabilities 

Research has demonstrated that exposure to targeted bot networks (automated social media accounts that impersonate real users) can influence or amplify public perceptions. While it is too soon to draw conclusions about the direct impact of AI-generated content on election outcomes, the conversational persuasiveness of AI-generated text, the interactive nature of AI bots, and the ability to produce uncensored content significantly increase the complexity of combating election disinformation.

A field experiment in India revealed that users of mobile messaging platforms were more likely to fall for fake news in video form than in text. This highlights a troubling scenario given the increasingly cheap access to high-quality deepfake video made possible by GenAI. Another study, conducted in five countries, showed that people’s perceptions of content reliability vary with their understanding of labels such as ‘AI-Generated,’ ‘Manipulated,’ and ‘Deepfake,’ underscoring the need for thoughtful design of user-facing countermeasures that protect elections against influence operations. On the other hand, researchers from the University of Bristol found that even when people were warned that videos were fake, they tended to believe deepfake videos were real and looked for signs of authenticity in the content. This raises questions about the efficacy of preventative measures against AI-generated disinformation.

Despite these challenges, there is hope. For example, just as AI can power disinformation campaigns, large language models (advanced AI systems that can process and generate human language) like GPT-4 can potentially be employed to counter conspiracy theories effectively. 
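
To make this concrete, below is a minimal sketch of how one might prompt a model to draft a fact-based rebuttal to a conspiracy claim. It assumes the OpenAI Python client with an API key in the environment; the model name, system prompt, and example claim are illustrative assumptions, not the method used in the research cited above.

```python
# Minimal sketch: asking an LLM to draft a calm, evidence-focused rebuttal
# to a conspiracy claim. Assumes the OpenAI Python client (pip install
# openai) and an OPENAI_API_KEY in the environment. The system prompt and
# model choice are illustrative, not a vetted counter-disinformation recipe.
from openai import OpenAI

client = OpenAI()

def draft_rebuttal(claim: str) -> str:
    """Return a respectful, evidence-based response to the given claim."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a fact-checking assistant. Respond to the "
                    "user's claim with a respectful, evidence-based "
                    "counter-argument, and point out where the claim "
                    "lacks supporting evidence."
                ),
            },
            {"role": "user", "content": claim},
        ],
    )
    return response.choices[0].message.content

print(draft_rebuttal("Ballot totals were secretly altered by satellite."))
```

Research in this area typically emphasizes interactive dialogue tailored to a person’s specific beliefs; a single-shot prompt like this one is only the simplest possible starting point.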

Did the ecosystem learn from 2020?

Ensuring a free and fair election requires a global community and a set of shared standards. Yet when it comes to preventing technology from unduly influencing election outcomes, mitigation efforts are for the most part left to individual companies and countries to enforce, with little to guarantee consistency across them. Through our combined research, we found that a combination of media literacy training and increased transparency, along with the sharing of election procedural information from authoritative sources, may have played a major role in reducing opportunities for misinformation to be wielded effectively. This success is partly due to the guardrails implemented by some AI tools such as Gemini, Claude 3, and ChatGPT and its plugins. These tools either prevented the creation of election-related AI content or adopted approaches to limit the sharing of election-related information.
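
To illustrate the kind of guardrail described above, here is a minimal sketch of one simple pattern: intercepting election-related prompts and redirecting users to authoritative sources instead of generating an answer. The keyword list and redirect text are hypothetical; production systems rely on trained classifiers and layered policy enforcement rather than keyword matching.

```python
# Minimal sketch of an election-topic guardrail: if a prompt looks
# election-related, return a redirect to authoritative sources instead
# of passing it to the model. Keywords and redirect text are hypothetical;
# real deployments use trained classifiers, not substring matching.
ELECTION_TERMS = {
    "election", "ballot", "voter registration",
    "polling place", "vote count",
}

REDIRECT_MESSAGE = (
    "I can't help with election-specific questions. For accurate, "
    "up-to-date information, please consult your official election "
    "authority."
)

def guardrail(prompt: str) -> str | None:
    """Return a redirect message if the prompt looks election-related,
    or None if it is safe to pass the prompt on to the model."""
    lowered = prompt.lower()
    if any(term in lowered for term in ELECTION_TERMS):
        return REDIRECT_MESSAGE
    return None

# Example: this prompt would be intercepted before reaching the model.
print(guardrail("Where is my polling place for the 2024 election?"))
```

The trade-off in any such design is between over-blocking benign civic questions and under-blocking adversarial phrasings, which is one reason the tools named above differ in whether they refuse election queries outright or answer with pointers to authoritative sources.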

We look forward to walking through our selected case studies at our panel at TrustCon on Wednesday, July 24, at 1:30 PM. For those unable to attend, there will be a livestream follow-up on August 1 at 1:00 PM, hosted by All Tech Is Human.

Photo credit: “Six-fingered man,” Rob Reiner, The Princess Bride, 1987.