Adapted from Hafiz Malik’s article, We Need to Get Serious About Election Deepfakes, originally published in The Hill on September 3, 2024.
The 2024 election season has put a spotlight on a troubling reality: deepfakes are no longer hypothetical threats; they are reshaping how we perceive truth online. A stark reminder came when President Biden announced his withdrawal from the presidential race in July. Simultaneously, social media platforms buzzed with a letter, allegedly from Biden, sparking confusion over its authenticity. In real time, audiences were forced to ask: was this real, or just another deepfake?
As a computer engineer who has spent over 15 years studying deepfakes, I find it both heartening and alarming that skepticism is becoming a reflex for some. However, while AI experts and tech-savvy audiences may question what they see, the average American might not. This is where the danger lies.
Deepfake-enabled disinformation has already altered election dynamics globally, from South Africa to India. In the U.S., the fake audio clip of Vice President Kamala Harris, released shortly after she became the presumptive Democratic nominee, demonstrated how such manipulation could influence public opinion here as well.

Despite the severity of this threat, U.S. efforts to combat deepfakes remain fragmented. Tech companies, regulators, and policymakers have adopted reactive, band-aid measures rather than comprehensive solutions. For instance, some commercial software companies now restrict the creation of deepfake models of high-profile figures. While this may deter casual misuse, it does little to stop determined bad actors, who can easily bypass such measures. Ironically, these restrictions also hinder researchers like me, who strive to stay ahead of these malicious advancements.
The Path Forward: A Coordinated Approach
Addressing this growing threat requires more than isolated efforts. Here are some key steps that stakeholders must take:
- Sophisticated Detection and Watermarking: Generative AI companies need to embed robust digital watermarking in their content. By embedding identifiable markers in every AI-generated file, we can trace its origin and ensure transparency (a conceptual sketch of this traceability idea follows this list). Additionally, AI platforms must develop more sophisticated detection tools to counteract the subtle distortions that fool current systems.
- Social Media Accountability: Social media platforms must deploy advanced tools to identify and remove disinformation swiftly. Federal regulators must enforce accountability, ensuring that these companies prioritize combating the spread of deepfakes over profit.
- Legislative Action: The U.S. should follow the lead of the European Union, which recently passed laws requiring that all deepfakes be labeled and that detection mechanisms such as watermarking be employed. A bipartisan Senate bill proposing a ban on AI deception in political ads is a step in the right direction, but more decisive action is needed to address the broader issue of deepfake proliferation on social media.
- Public Awareness Campaigns: The federal government should educate the public about the dangers of deepfakes and teach individuals how to identify manipulated content. Awareness campaigns can empower users to critically assess the authenticity of what they see online.
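To make the watermarking recommendation above concrete, here is a minimal, purely illustrative sketch of how a generator could bind an origin marker to a file so that anyone can later check where it came from and whether it was altered. It is a toy example under stated assumptions, not any company's actual scheme: production watermarks are embedded in the media signal itself and designed to survive re-encoding, and the key, function names, and generator ID below are hypothetical.

```python
# Toy provenance marker: illustrates the traceability idea only, not a real
# watermarking scheme (real systems embed robust marks in the signal itself).
import hashlib
import hmac
import json

SECRET_KEY = b"generator-signing-key"  # hypothetical key held by the AI provider


def attach_marker(media_bytes: bytes, generator_id: str) -> dict:
    """Produce a provenance record binding the content hash to its generator."""
    content_hash = hashlib.sha256(media_bytes).hexdigest()
    record = {"generator": generator_id, "sha256": content_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_marker(media_bytes: bytes, record: dict) -> bool:
    """Check that the media matches the record and the signature is authentic."""
    if hashlib.sha256(media_bytes).hexdigest() != record.get("sha256"):
        return False  # content was altered after the marker was issued
    payload = json.dumps(
        {"generator": record["generator"], "sha256": record["sha256"]},
        sort_keys=True,
    ).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.get("signature", ""))


if __name__ == "__main__":
    fake_audio = b"...synthesized audio bytes..."
    marker = attach_marker(fake_audio, generator_id="example-voice-model")
    print(verify_marker(fake_audio, marker))         # True: origin traceable
    print(verify_marker(fake_audio + b"!", marker))  # False: tampered content
```

Even a sketch this simple shows why the recommendations pair watermarking with better detection: once a file is re-recorded or stripped of its marker, verification fails, so robust in-signal watermarks and independent detection tools are both needed.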
A Call to Action
The 2024 election will undoubtedly see a surge in deepfakes that spread faster and look more convincing than ever before. While some measures, like public awareness and improved detection tools, can be deployed before Election Day, we must also prepare for the long-term battle against this technology.
Deepfakes pose a clear and present danger to our democracy. Combating them requires a unified, all-encompassing approach—piecemeal efforts are no longer sufficient. From tech companies to lawmakers, every stakeholder must recognize the urgency and act decisively. Only then can we hope to safeguard the integrity of our elections and our shared reality.
Link to the article: https://thehill.com/opinion/technology/4859312-we-need-to-get-serious-about-election-deepfakes-now/