Adapted from PolitiFact FL: Social media accounts use AI-generated audio to push 2024 election misinformation, originally published by Central Florida Public Media.
As AI technology evolves, so does its potential to influence major events, including elections. The 2024 presidential race has already seen a surge in AI-generated audio spreading false claims and misleading narratives, often targeting high-profile political figures. Platforms like TikTok, YouTube, Facebook, and X are at the forefront of this disinformation battle, with fabricated content designed to manipulate public perception.
AI-Generated Misinformation on Social Media
In February, a viral TikTok video falsely claimed that Donald Trump had withdrawn from the presidential race. The audio, suspected to be AI-generated, was convincing enough to amass over 161,000 views before being removed. This was just one of several videos identified through PolitiFact’s partnership with TikTok to combat inauthentic content.
Generative AI tools simplify the creation of false audio, making it harder for users to distinguish real content from fake. Unlike video deepfakes, which can sometimes reveal visual inconsistencies, AI-generated audio lacks obvious cues, making it a subtler yet equally potent weapon of disinformation.
The Role of AI Detection Experts
To analyze these videos, PolitiFact consulted generative AI experts like Hafiz Malik, a professor at the University of Michigan-Dearborn specializing in deepfake detection. Malik’s AI detection tool classified four out of five TikTok videos as containing synthetic, AI-generated audio. The fifth video showed mixed results, with some segments flagged as deepfake.
Malik highlights the monotonic nature of AI-generated voices, noting that these audios often lack the emotional depth and natural pacing of human speech. “They lack the rise and fall in the audio that you typically have when you talk,” Malik explained. These subtleties can help identify manipulated content, even as AI continues to improve.
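Malik’s detection tool is not public, but the “flat pitch” cue he describes can be illustrated with a toy heuristic: estimate the pitch of each voiced frame and flag clips whose pitch varies unusually little. The sketch below is a hypothetical illustration, not Malik’s method; the pitch values are hard-coded stand-ins for what a real pitch tracker would produce, and the threshold is an assumed value chosen for demonstration.

```python
import statistics

def is_suspiciously_flat(pitch_hz, cv_threshold=0.05):
    """Toy heuristic: flag audio whose frame-level pitch estimates
    vary unusually little (a 'monotonic' voice).

    pitch_hz: per-frame pitch estimates in Hz (0 or None = unvoiced).
    cv_threshold: coefficient-of-variation cutoff; an assumed value
    for illustration, not a calibrated detector setting.
    """
    voiced = [f for f in pitch_hz if f and f > 0]  # keep voiced frames only
    if len(voiced) < 2:
        return False  # not enough data to judge
    mean = statistics.fmean(voiced)
    cv = statistics.stdev(voiced) / mean  # relative pitch variation
    return cv < cv_threshold

# Hard-coded stand-ins for pitch-tracker output (Hz per frame):
natural = [110, 95, 140, 125, 180, 105, 150, 98, 132, 160]
flat = [120, 121, 119, 120, 122, 120, 121, 119, 120, 121]

print(is_suspiciously_flat(natural))  # varied pitch -> False
print(is_suspiciously_flat(flat))     # near-constant pitch -> True
```

Real detectors combine many such acoustic and spectral features with machine-learned models; a single pitch-variance check like this would be easy for improving generators to evade.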

Platforms Under Scrutiny
TikTok’s policies on harmful misinformation require users to label AI-generated content, but experts like Jack Brewster from NewsGuard observed many users bypassing these guidelines. Similarly, YouTube has strict community guidelines that mandate disclosures for AI-generated election materials. Yet these platforms still struggle to curb the spread of misleading content, which often migrates to secondary platforms like Facebook and X.
AI’s ability to mass-produce content at low cost also enables bad actors to remain anonymous. This anonymity diminishes accountability, further fueling the spread of disinformation.
How AI Misinformation Operates
The targeted use of AI-generated content extends beyond the U.S. In Indonesia, AI-generated cartoons rehabilitated a controversial political figure’s image ahead of elections, showing how generative AI can reshape public narratives globally.
On platforms like TikTok, AI-generated audio has been used to make outrageous claims, such as Trump suffering a heart attack or Supreme Court Justice Clarence Thomas removing candidates from the 2024 race. These videos often feature eye-catching visuals and disembodied narration, leveraging emotional appeal to draw viewers.

Combating the Misinformation Crisis
Experts like Malik emphasize the importance of critical listening and cross-checking information to identify AI-generated audio. Detection tools, while imperfect, are improving, but generative AI continues to evolve, intensifying the arms race between creation and detection.
To counter this challenge, platforms must enforce stricter regulations and transparency measures. Meanwhile, users are encouraged to stay vigilant, verifying content through multiple sources and paying attention to linguistic inconsistencies and audio anomalies.
A Call for Responsibility
The democratization of AI technology has empowered creators but also enabled the mass production of disinformation. Malik warns of the dangers this poses: “Now, anyone with access to the internet can have the power of thousands of writers, audio technicians, and video producers, for free, and at the push of a button.”
As the 2024 election nears, the stakes are high. Combating AI-generated misinformation will require collaborative efforts from tech companies, policymakers, and the public to ensure a fair and transparent democratic process.
Link to the article: https://www.cfpublic.org/politics/2024-03-19/politifact-fl-social-media-accounts-use-ai-generated-audio-to-push-2024-election-misinformation