Fact or Fiction: Combatting Deepfakes During an Election Year

With technology accelerating at a breakneck pace and elections looming, the rise of new AI-powered technologies brings into question how prepared we are to combat misinformation.

October 25, 2024


2024 is one of the biggest election years worldwide and the first to feature an unprecedented amount of AI-powered influence. Pavel Goldman-Kalaydin of Sumsub says that as public and private sectors move to address emerging threats, collaboration in the AI community is key to finding effective solutions.

Over 60 countries representing more than half of the world's population are headed for the voting booths in 2024. With elections looming, the rise of new AI-powered technologies brings into question how prepared we are to combat election misinformation and maintain voters' confidence in a fair, transparent, and democratic process. The explosion of advances in AI makes it easier than ever to generate and share falsified information in the form of text, audio, video, and images. The democratization and accessibility of this technology have drastically lowered the barrier to entry for creating highly realistic faked content, putting a powerful tool that could shape political narratives within nearly everyone's reach.

The True Reach of Deepfakes

The World Economic Forum's 2024 Global Risks Report found that 53% of global experts named AI-generated misinformation and disinformation the second leading current risk, with societal and/or political polarization close behind at 46%. The report expects misinformation and disinformation to rise to the top of the list within the next two years. Already, AI-generated content is a catalyst for political conflict. Candidates and constituents leverage AI tools to influence election results by disseminating misinformation and swaying public opinion, potentially altering the course of history.

Examples of this occur globally, from the deepfaked video advertisements depicting UK Prime Minister Rishi Sunak promoting a scam investment platform to the AI-generated robocalls mimicking President Biden's voice to urge New Hampshire voters away from the presidential primaries. Slovakia's 2023 parliamentary election showcases the tangible impact of deepfakes. Days before the vote, an AI-generated audio recording of a top candidate boasting about rigging the election went viral on social media, contributing to his party's defeat. While this wasn't the first instance of deepfakes spreading misinformation, it is an ominous indicator of what other nations and government officials must prepare for as election season approaches.

Addressing the Root Cause

The problem at hand consists of two primary facets: the generation of deepfakes and their subsequent distribution. As AI technologies continue to develop and become more widely accessible, so do the threats associated with them. Already, deepfakes are frequently used to impersonate individuals for fraudulent activity. Globally, a tenfold increase in deepfakes was detected between 2022 and 2023. Breaking it down by region, North America alone saw a 1740% increase in deepfakes, followed by APAC at 1530%. We interact daily with the technology behind this surge – from face swap filters on social media to AI applications on our phones.

As the technology becomes more powerful and the output more realistic, the risk of AI-generated content being used maliciously to share falsified information increases. In the context of elections, the risk lies in uncontrollable deepfaked content that could promote political propaganda and sway public opinion. While the generation of falsified content remains a concern, the distribution of such content is an even more pressing threat. Arguably, the bigger danger lies not in the deepfaked videos of politicians themselves but in how they are circulated and woven into false narratives.

Public Perception Isn’t Helping

In the US, the threat of deepfakes comes during a historic low in trust in governmental and political institutions, exacerbating the effects of misinformation and disinformation. According to Pew Research, trust in the US federal government stands near its lowest level in nearly seven decades of polling. This is reflected in the growing distrust of politicians and the US Supreme Court, as favorable views of the Court have fallen to their lowest point in three decades. Globally, the OECD found public perception of government integrity is low, with nearly 50% of respondents doubting the integrity of elected or appointed officials. Coupled with the current political polarization and instability pushing voters further into their parties, the effects of false information supported by AI-generated content are sure to be wide-ranging.

Existing Defenses

Governments and companies are implementing legislation and technologies to combat online disinformation and misinformation. Major tech companies, including OpenAI, Google, and Meta, seek to limit the damage their technologies could inflict through digital watermarking and content disclosure labels. Social media platforms are taking steps to disclose the use of AI-generated material to protect users from misinformation and disinformation.
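To make the watermarking idea concrete, here is a minimal sketch of least-significant-bit (LSB) watermarking in Python with Pillow. It is purely illustrative – real provenance schemes such as C2PA content credentials or Google's SynthID are far more robust and not public in detail – and every name and tag value here is an assumption, not any vendor's API.

```python
# Minimal LSB watermarking sketch. Illustrative only: the tag, functions,
# and scheme are assumptions, not any real provenance standard.
from PIL import Image

TAG = "AI-GENERATED"  # hypothetical provenance label

def embed_watermark(img: Image.Image, tag: str = TAG) -> Image.Image:
    """Hide `tag` in the LSBs of the red channel, one bit per pixel."""
    bits = "".join(f"{b:08b}" for b in tag.encode("utf-8"))
    out = img.convert("RGB").copy()
    px = out.load()
    w, h = out.size
    assert len(bits) <= w * h, "image too small for the tag"
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the red LSB
    return out

def read_watermark(img: Image.Image, n_chars: int = len(TAG)) -> str:
    """Recover `n_chars` characters from the red-channel LSBs."""
    px = img.convert("RGB").load()
    w = img.size[0]
    bits = [str(px[i % w, i // w][0] & 1) for i in range(n_chars * 8)]
    raw = bytes(int("".join(bits[i:i + 8]), 2) for i in range(0, len(bits), 8))
    return raw.decode("utf-8", errors="replace")

# Usage: tag an image at generation time, check it on upload.
# marked = embed_watermark(Image.open("generated.png"))
# assert read_watermark(marked) == TAG
```

A naive LSB mark like this is erased by recompression, resizing, or cropping, which is why production watermarking embeds signals designed to survive such transformations and pairs them with cryptographically signed metadata.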

Government officials are racing to implement regulations and policies to ensure businesses, developers, and researchers keep safety at the forefront as AI advances. The UK's Online Safety Act holds platforms legally responsible for removing illegal misinformation, and the Electoral Commission's Digital Imprints guidance will require political AI-generated material to carry a clear digital imprint. In the US, the Federal Communications Commission banned AI-generated voices in robocalls following the case in New Hampshire. Additionally, the Biden Administration enacted requirements for overseeing the development of safe, secure, and trustworthy AI models and systems, and appointed AI officials in each federal agency.

Improving Deepfake Detection

Despite the current measures, the efficacy of such policies remains questionable, and many wonder if they come too late. Even with embedded digital watermarking and labeled content, detecting malicious deepfakes remains challenging. We live in a world where it is increasingly easy both to create a falsified reality and to dismiss genuine material as fake, confusing the public about what is true – all the more so when figures of authority push the narrative. The remaining policy gaps leave voters exposed to manipulated content and at risk of making poorly informed decisions.

Companies and media platforms must take charge by implementing mandatory checks that flag AI-generated or deepfaked content. Enforcing policies that guarantee the authenticity of content helps shield users from misinformation and disinformation. Another approach is user verification, in which verified users would bear responsibility for the authenticity of the visual content they post, while content from non-verified users would be clearly marked, urging viewers to exercise caution before trusting it.
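As a rough sketch of how such a check-plus-verification policy might translate into platform logic – purely illustrative, with a placeholder detector and made-up thresholds and labels – consider:

```python
# Illustrative moderation logic for the policy above. The detector function,
# thresholds, and labels are hypothetical stand-ins, not a real platform's API.
from dataclasses import dataclass

@dataclass
class Upload:
    user_id: str
    user_verified: bool  # has the uploader passed identity verification?
    media_path: str

def detect_deepfake_probability(media_path: str) -> float:
    """Placeholder: a real deployment would call a trained detection model."""
    return 0.0  # dummy score so the sketch runs end to end

def label_upload(upload: Upload,
                 block_threshold: float = 0.9,
                 warn_threshold: float = 0.5) -> str:
    """Map detector score and uploader status to a disclosure label."""
    score = detect_deepfake_probability(upload.media_path)
    if score >= block_threshold:
        return "held for review: likely AI-generated"
    if score >= warn_threshold:
        return "label: possibly AI-generated"
    if not upload.user_verified:
        return "label: unverified source"  # provenance unknown, flag it
    return "ok"  # verified uploader vouches for authenticity

print(label_upload(Upload("u1", user_verified=False, media_path="clip.mp4")))
# -> "label: unverified source"
```

The design point is that detection and provenance complement each other: when the detector is uncertain, the uploader's verification status still gives viewers a signal about how much to trust the content.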

Users should also educate themselves on the risks of deepfakes as agencies work to protect them this year. Tips that once helped spot deepfakes may no longer apply as the technology improves. With technology accelerating at a breakneck pace, the public needs to stay informed and aware. Working with AI experts to develop models for deepfake prevention is crucial to ensuring secure elections.


Pavel Goldman-Kalaydin
Pavel is the Head of Artificial Intelligence and Machine Learning at Sumsub, a global Know-Your-Customer (KYC)/AML/anti-fraud company. He oversees the development of technologies for preventing financial fraud, models for detecting deepfakes, and tools for document intelligence. With a background in software engineering, he has been working with AI and ML for over 10 years.