Applied Trust and Safety Initiative

Q&A: Hannah Ajakaiye on manipulated media in the 2023 Nigerian presidential elections, generative AI, and possible interventions

By Eric Davis on March 18, 2024

Cover Photo by the Commonwealth Secretariat.

In early 2023, voters in Nigeria’s presidential election were inundated with election disinformation. AI-generated deepfakes, as well as paid posts falsely linking candidates to militant or separatist groups, filled social media platforms. In the months leading up to the election, journalist Hannah Ajakaiye spearheaded local efforts to fight the proliferation of misinformation with FactsMatterNG, a fact-checking initiative she founded to restore information integrity on digital platforms.

The Nigerian presidential election was perhaps the world’s first instance of a country’s democratic process being marred by such means. The scope of the problem was beyond the resources of local journalists to address, and the problem will continue to grow, Ajakaiye told IST’s Senior Vice President for Special Projects Eric Davis.

As AI-generated deepfakes, shallow fakes (media manipulated via conventional means), and the resulting spread of disinformation continue to proliferate in the 2024 U.S. election cycle and other elections worldwide, IST has launched the Generative Identity Initiative (GII) in an effort to better understand how generative artificial intelligence affects social trust and institutional decision making.

GII will build on the work of IST’s Digital Cognition and Democracy Initiative (DCDI), which in 2022 published its foundational report, Rewired: How Digital Technologies Shape Cognition and Democracy. Drawing on a coalition of more than 50 subject matter experts, DCDI found that digitally influenced cognition could undermine core elements of democratic society by eroding the independent, critical thinking of individuals. With the advance of generative artificial intelligence, that risk is no longer hypothetical.

In this interview, Ajakaiye, now a John S. Knight Journalism Fellow at Stanford University, sat down with Eric Davis to discuss the proliferation of generative AI in the 2023 Nigerian presidential election, her observations on its effects, and possible mechanisms for intervention. 

Edited Transcript

Q: Would you share a little about your background and what motivated you to become a journalist?

A: I’ve worked as a journalist in Nigeria for over eight years now. My interest in journalism was motivated by a desire to change society, and I was inspired by the stories of people around me, especially political figures in my country. Growing up, I had this interest in public service, in going into politics, because I wanted to make a difference. And in Nigeria, you have some leading examples of people who started in journalism before going into politics. So that’s what I was aiming for by going into journalism.

In college, I studied language, and I wrote for press and newspaper organizations in Nigeria. So I went into journalism fully, and when I started, I was reporting on issues around social justice and development: the Sustainable Development Goals, water and sanitation, children displaced by the Boko Haram crisis, basically those issues.

In 2018, I did a fellowship with the International Center for Journalists (ICFJ), the TruthBuzz fellowship, which was about looking for interesting ways to essentially make truth go viral on social media platforms. Before then, in 2017, I had an opportunity to do internships at the Wall Street Journal and The Times of London in the UK. That experience introduced me to how digital technology helps engage audiences and how news innovation can extend audience reach. So when I saw the call for the TruthBuzz fellowship, I thought I could apply some of the insights and learnings from the London fellowship. I wanted to see how we could expand the audience for fact-checking content in Nigeria, since people often don’t read fact checks because they find them boring.

At the ICFJ, I initially worked as a TruthBuzz fellow from 2018 to 2019. I became a Knight Fellow with ICFJ in 2020 and then progressed to initiating a project on using influencers to combat disinformation about the Covid-19 pandemic. Later on, we got support from the National Endowment for Democracy (NED) to do an election fact-checking project in Nigeria. I did that for 14 months before coming to Stanford University to take up an offer as a visiting fellow with the John S. Knight Journalism (JSK) Fellowships.

Q: How are you finding the Knight fellowship so far?

A: I find it very interesting because it’s an opportunity to use the resources at Stanford to implement projects and initiatives around challenges confronting journalism. My project at Stanford as a JSK journalism fellow is focused on combating disinformation on encrypted social media. So, I’m exploring interesting ways of combating disinformation on a platform like WhatsApp, which is very popular in Nigeria. The urgent need to address this problem was one of the insights I got from working on the election disinformation project. I felt there’s a lot of misinformation on WhatsApp, and not a lot of organizations in Nigeria are looking into creative ways of combating disinformation on that platform.

Q: Diving into the election – what are examples of the different types of manipulated media that were used to spread disinformation?

A: Some of the deepfakes that we saw during the Nigerian election were intended to enhance the public perception of Peter Obi, a presidential candidate with a large youth following. Obi also had the most engaged social media presence of all the candidates. 

There was a deepfake video of Hollywood actors endorsing Obi and a deepfake of Elon Musk and Donald Trump declaring support for him as well. Endorsements from celebrities or public figures from the West are considered a weighty validation of political campaigns, so those were two prominent examples targeted toward improving the public perception of a particular candidate. 

There was also a shallow fake video, debunked by Reuters, which showed Bola Tinubu facing criticism for an incoherent response at a Chatham House event during the campaign.

Another deepfake that made the rounds a few hours before voting was a manipulated audio clip of another candidate, Nigeria’s former vice president Atiku Abubakar, discussing plans to rig the election. Judging by its viral spread, that deepfake audio had some impact on people’s perceptions.

Q: The dialogue in that audio deepfake was not subtle. It was somewhat reminiscent of a movie with formulaic villains.

A: Yes, one would expect that people should be able to decipher that, but because people have their biases and preconceived opinions, they won’t even allow for critical thinking when engaging with manipulated content of this nature. Another factor to consider is that many people are not digitally literate.

Some of the deepfakes played to confirmation bias. The deepfake audio about Atiku and his supporters planning to rig the election strengthened people’s belief that the election could be rigged. This wasn’t hard to believe, since Nigerian politicians have been known to rig elections in the past. The deepfakes about Peter Obi served as a kind of confirmation that this is the person who can actually bring the change that young people want, since he’s getting endorsements from American celebrities and public figures. It’s also about validation from the West, convincing people that this is the person who can lead Nigeria to the promised land.

Q: What’s your read on where most of the deepfakes originated? Candidate campaigns, semi-unaffiliated groups? 

A: For a candidate like Peter Obi, I would say that the deepfakes came from semi-unaffiliated groups, since he has a cult-like following of social media users. They call themselves “Obi-dients.” I don’t think some of the videos were commissioned by the campaign, given their quality. I believe those were deepfakes created by followers trying to prove a point, especially since supporters of each of the top three candidates were trying to outshine each other on social media platforms.

During the campaign period, the online debate and conversations about the elections were very bitter and toxic, especially on a platform like Twitter. Supporters of other candidates also engaged in lots of disinformation activities by producing shallow fakes, like manipulated photos and videos. 

However, the audio deepfake could have been commissioned by a campaign, owing to the timing of its release. Aside from deepfakes and shallow fakes, a major disinformation tactic commissioned by campaigns was the use of paid influencers and the manipulation of social media algorithms to trend certain hashtags.

Q: Did the other top candidates have online supporters equivalent to the “Obi-dients?”

A: Yes. Supporters of Atiku Abubakar, former vice president of Nigeria, referred to themselves as the “Atikulated,” while Bola Tinubu’s supporters were the “BATified.” BATified comes from the initials of Bola Tinubu’s full name, Bola Ahmed Tinubu. So you had these hashtags denoting different movements supporting the three top contenders.

Q: Were there distinguishing characteristics of each candidate’s online supporters?

A: Obi had more young people and organic followers. Among these young people were some with tribal sentiments for Obi’s candidacy, so ethnicity played a role in the election. Some of his followers desperately wanted to prove a point. This was reflected in the nature of the deepfakes created and in the people who amplified their reach on social media platforms.

Supporters and operatives of other candidates spread disinformation as well. I believe the audio deepfake about Atiku Abubakar trying to rig the election could have been produced by the opposing camps. Some of the political parties invested in situation rooms or media centers where social media influencers were paid to spread disinformation.

Q: We’ll come back to shallow fakes in a moment — would you elaborate on your previous point, that some of the disinformation was spread by social media influencers paid by the campaigns?

A: A BBC investigation revealed that some social media influencers were paid by campaigns. And not just the Bola Tinubu camp — I think it happened on both sides of the divide, since Atiku and Tinubu are known to be moneybags. Even during the 2019 elections, the previous government invested in a ‘disinformation army’ consisting of influencers and digital foot soldiers paid to distort public opinion through coordinated disinformation campaigns and trolling.

Q: Are there existing policies, or efforts underway, to regulate deepfakes and shallow fakes in campaigns and elections?

A: The Nigerian government doesn’t have any concrete policies regulating the use of generative AI or synthetic media. The new administration is trying to put together a policy on artificial intelligence, but there’s nothing concrete yet.

We don’t even have a full-fledged policy on disinformation yet. I remember joining the Nigerian Fact Checkers Coalition in a discussion with the National Information Technology Development Agency (NITDA) about setting up a code of practice for interactive computer service platforms, internet intermediaries, and their agents in Nigeria. It’s only recently that the Minister of Communication, Innovation, and Digital Economy put out a call to researchers and AI experts of Nigerian descent to see how they can draw up a regulation for the country.

I’m also aware that the United Nations Information Centre (UNIC) in Nigeria is working on getting input from stakeholders on a UN Code of Conduct for Information Integrity on Digital Platforms, but that is still a work in progress. Obviously, we don’t have robust regulation yet because people are still trying to understand the problem. The 2023 election was the first in which we saw deepfakes being used, a new challenge that the government and journalists are trying to respond to.

In Nigeria, what stakeholders are trying to do is see whether they can work in partnership with platforms to get support in debunking or labeling content, so that people have more awareness and are able to tell which media is AI-generated.

Q: What types of prevention or mitigation approaches might help make a difference in the future?

A: A multi-stakeholder approach that builds collaboration among platforms, journalists, and civil society to advocate for labeling of AI-generated content, while also deploying media literacy as a tool to help users develop news judgment on the platforms where they engage with this content.

On the policy side, it is important to ensure that platforms invest sufficiently in information integrity efforts during elections. That’s something that’s missing when it comes to elections in the Global South, especially in Africa. In the last election, we didn’t see sufficient investment in countering disinformation from the platforms or the creators of these tools.

A group of journalists and verified International Fact-Checking Network (IFCN) signatories came together to form the Nigerian Fact Checkers Coalition (NFC) during the last elections, but we had limited resources and no support from the platforms. Even before the elections, Twitter closed its office in Africa; it’s like Africa is always an afterthought. There should also be an emphasis on giving researchers access to data that can be used to measure the impact of synthetic media and to hold platforms accountable.

Q: Were deceptive deepfakes used in any official campaign communications, such as commercials?

A: No, none that I’m aware of.

Q: In October, the Meta Oversight Board announced it would review Meta’s decision not to remove a shallow fake video of a U.S. politician; presently, the scope of Meta’s manipulated media policy is limited to media produced using AI. This type of scoping is also reflected in some of the efforts at regulation in the U.S. and abroad. In the election, how widespread were shallow fakes compared to deepfakes? 

A: Shallow fakes were more widespread than deepfakes during the election. They were easier for more people to make; deepfakes required greater skill and access. However, although shallow fakes were more common, deepfakes had a more pronounced impact on public discourse.

Q: Is there visibility into the specific tools that were used to produce deepfakes? 

A: There is no visibility into the particular deepfake tools that were used. The fact checks generally mentioned only that the media was created with AI.

Q: What were the top platforms used to distribute manipulated media? 

A: WhatsApp is the number one platform for distribution, especially for audio deepfakes. It is the most popular messaging platform in Nigeria, with over 51 million users according to recent data from Nigeria’s Minister for Communication and Digital Economy. Deepfakes also made the rounds on Twitter, Facebook, and TikTok. TikTok is another platform that we are monitoring for disinformation, especially among young social media users.

Q: As you know, the proliferation of materially deceptive media also brings the risk of collateral damage to information integrity, the so-called “Liar’s Dividend.” Basically, people can more persuasively assert, or choose to believe, that an authentic video is actually fake. Have you seen instances of this in Nigerian politics?

A: We are seeing cases of the Liar’s Dividend. There was a corruption case in which a former state governor was captured on video receiving bribes; he denied it, saying the video was manipulated. Another example came during the election, when Bola Tinubu (now president) was captured on video making an incoherent statement at a town hall meeting, struggling to utter a word that sounded like “hullabaloo.”

His media handlers said the video was manipulated, but it was not. The same was said of a leaked audio conversation between Peter Obi and a famous pastor about the elections; Obi denied it and said the audio was manipulated. It’s becoming a trend for political actors to point to AI manipulation even when they are confronted with visual or audio evidence.

Q: From a regional standpoint, what are you observing? For example, you shared with me the deepfake of Zambian president Hakainde Hichilema (falsely) announcing that he wouldn’t seek re-election.

A: Outside Nigeria, aside from the Zambian example that you mentioned, we’ve seen the use of deepfake videos by pro-junta activists in Burkina Faso. We’ve seen cases of deepfakes being weaponized in Kenya. In South Africa, scammers are using deepfake technology to promote cryptocurrency schemes, and there was also a shallow fake video involving South Africa’s president Cyril Ramaphosa. Deepfakes are being used in scams and cyberbullying, and politicians are using the technology for smear campaigns; it’s predictable that the trend will continue to grow, especially with the burgeoning youth population and rising internet penetration.

Q: In closing, and with the upcoming raft of 2024 elections in mind, what else would you like people to know about efforts to combat manipulated media in elections and governance?

A: Perhaps the need for journalists and civil society organizations in the developing world to have a support structure for mitigating and debunking synthetic media created with deepfake technologies.

I think there should be a balance of power when it comes to responses from platforms. Africa should not be an afterthought; it deserves a sufficient response from tech platforms on issues of accountability and transparency. We also want to see an equal amount of investment and attention in mitigating the effects of deepfakes and other forms of disinformation, as is being done in the Americas and the EU. We want to see more engagement among government, civil society, media, and academia. Also, researchers and academics on the African continent need access to data and APIs that they can use to study disinformation. The balance of power in response and mitigation efforts is an issue that really needs to be addressed.