Adapted from "Will hackers, trolls and AI deepfakes upset the 2024 election?", originally published by the Los Angeles Times.

The use of artificial intelligence (AI) in generating misinformation has reached an alarming level, with the 2024 election shaping up to be a battleground for hackers, trolls, and deepfake technology. What once seemed like science fiction is now a pressing reality, as AI-generated disinformation targets voters with unparalleled precision and scale.

For Hafiz Malik, professor of electrical and computer engineering at the University of Michigan-Dearborn, the rapid evolution of deepfake technology poses an unprecedented threat to democracy. Malik, who has spent years analyzing deepfakes, emphasizes how tools that were once available only to state-sponsored enterprises are now accessible to virtually anyone. “The scale of disinformation is unimaginable. The cost of production and dissemination is minimal,” Malik said.

The Rise of AI-Generated Misinformation

Deepfakes have become a go-to tool for bad actors aiming to sow confusion and distrust. Malik points to recent cases, including a deepfake video of Ukrainian President Volodymyr Zelensky ordering his forces to surrender during Russia’s invasion, as clear examples of how AI can be weaponized.

“Truth itself will be hard to decipher,” said Drew Liebert, director of the California Initiative for Technology and Democracy. Liebert warns of scenarios where AI-generated robocalls spread false voting information to millions, potentially altering election outcomes.

Malik concurs, highlighting that deepfakes are increasingly sophisticated, with creators adding noise and distortion to make them harder to detect on platforms like X (formerly Twitter) and Facebook. However, he notes that AI still struggles to perfectly replicate speech patterns, facial expressions, and emotions, which leaves detectable traces for experts like him to uncover.

A New Frontier for Disinformation

AI-generated content isn’t limited to political campaigns. It has been used to manipulate global narratives, as seen in a Chinese disinformation ploy falsely claiming the U.S. caused the 2023 Maui wildfires to test a military-grade weather weapon. Malik emphasizes that the democratization of AI tools has emboldened bad actors, allowing them to target swing states and key demographics with tailored disinformation campaigns.

“People know your preferences down to your footwear,” said former U.S. Attorney Barbara McQuade, illustrating how data collected by social media platforms fuels micro-targeted disinformation. Malik agrees, noting that AI-generated messages can influence voter behavior in critical moments, such as false claims about polling place closures.

The Call for Regulation

In response to these threats, policymakers are introducing legislation to curb AI-generated disinformation. California lawmakers have proposed bills requiring watermarks and digital provenance in AI-generated content. But Malik remains skeptical of the tech industry’s commitment to solving this problem. “I cannot believe that multibillion- and trillion-dollar companies are unable to solve this problem,” he said. “Their business model is about more shares, more clicks, more money.”

While some measures, like Meta and Google’s pledges to label AI-generated political content, are a step in the right direction, Malik argues that they are far from sufficient. He emphasizes the need for robust detection tools and greater accountability from social media platforms.

A Glimpse into the Future

Despite its limitations, Malik predicts that AI technology will continue to improve, potentially making deepfakes indistinguishable from reality. “Things that were impossible a few years back are possible now,” he said. However, he remains optimistic that with the right tools and public awareness, the impact of AI-driven disinformation can be mitigated.

The stakes are high as we approach the 2024 election. Whether through legislative action, technological innovation, or grassroots efforts to educate the public, combating AI-generated disinformation will require a coordinated, all-hands-on-deck approach.

Link to the article: https://www.latimes.com/politics/story/2024-04-30/will-a-i-deepfake-videos-and-robocalls-threaten-the-2024-election

Adapted from "PolitiFact FL: Social media accounts use AI-generated audio to push 2024 election misinformation," originally published by Central Florida Public Media.

As AI technology evolves, so does its potential to influence major events, including elections. The 2024 presidential race has already seen a surge in AI-generated audio spreading false claims and misleading narratives, often targeting high-profile political figures. Platforms like TikTok, YouTube, Facebook, and X are at the forefront of this disinformation battle, with fabricated content designed to manipulate public perception.

AI-Generated Misinformation on Social Media

In February, a viral TikTok video falsely claimed that Donald Trump had withdrawn from the presidential race. The audio, suspected to be AI-generated, was convincing enough to amass over 161,000 views before being removed. This was just one of several videos identified by PolitiFact’s partnership with TikTok to combat inauthentic content.

Generative AI tools simplify the creation of false audio, making it harder for users to distinguish real content from fake. Unlike video deepfakes, which can sometimes reveal visual inconsistencies, AI-generated audio lacks obvious cues, making it a subtler yet equally potent weapon of disinformation.

The Role of AI Detection Experts

To analyze these videos, PolitiFact consulted generative AI experts like Hafiz Malik, a professor at the University of Michigan-Dearborn specializing in deepfake detection. Malik’s AI detection tool classified four out of five TikTok videos as containing synthetic, AI-generated audio. The fifth video showed mixed results, with some segments flagged as deepfake.

Malik highlights the monotonic nature of AI-generated voices, noting that these audios often lack the emotional depth and natural pacing of human speech. “They lack the rise and fall in the audio that you typically have when you talk,” Malik explained. These subtleties can help identify manipulated content, even as AI continues to improve.
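The "rise and fall" cue Malik describes can be illustrated with a toy heuristic: measure how much frame-level energy varies across a clip. The sketch below is my own illustration, not Malik's detection tool; real forensic systems analyze pitch contours, spectral artifacts, and many other features. Here, a flat sine tone stands in for monotonic synthetic speech and an amplitude-modulated tone stands in for natural prosody.

```python
import math
import statistics

def frame_rms(samples, frame_len=400):
    """Root-mean-square energy of each fixed-length frame."""
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    return [math.sqrt(sum(x * x for x in f) / len(f)) for f in frames]

def prosody_variation(samples):
    """Coefficient of variation of frame energy: a crude 'rise and fall' score."""
    rms = frame_rms(samples)
    return statistics.stdev(rms) / statistics.mean(rms)

sr = 8000
t = [i / sr for i in range(sr * 2)]  # two seconds of samples

# Flat 150 Hz tone: constant energy, like a monotonic synthetic voice.
flat = [math.sin(2 * math.pi * 150 * x) for x in t]

# The same tone with a slow amplitude envelope, mimicking natural rise and fall.
varied = [(0.6 + 0.4 * math.sin(2 * math.pi * 2 * x)) * math.sin(2 * math.pi * 150 * x)
          for x in t]

# The modulated signal should score much higher than the flat one.
print(prosody_variation(flat), prosody_variation(varied))
```

A heuristic this simple is trivially fooled, which is why, as Malik notes, expert tools combine many such subtle signals rather than relying on any single one.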

Platforms Under Scrutiny

TikTok’s policies on harmful misinformation require users to label AI-generated content, but experts like Jack Brewster of NewsGuard have observed many users bypassing these guidelines. Similarly, YouTube’s community guidelines mandate disclosures for AI-generated election materials. Yet both platforms still struggle to curb the spread of misleading content, which often migrates to secondary platforms like Facebook and X.

AI’s ability to mass-produce content at low cost also enables bad actors to remain anonymous. This anonymity diminishes accountability, further fueling the spread of disinformation.

How AI Misinformation Operates

The targeted use of AI-generated content extends beyond the U.S. In Indonesia, AI-generated cartoons rehabilitated a controversial political figure’s image ahead of elections, showing how generative AI can reshape public narratives globally.

On platforms like TikTok, AI-generated audio has been used to make outrageous claims, such as Trump suffering a heart attack or Supreme Court Justice Clarence Thomas removing candidates from the 2024 race. These videos often feature eye-catching visuals and disembodied narration, leveraging emotional appeal to draw viewers.

Combating the Misinformation Crisis

Experts like Malik emphasize the importance of critical listening and cross-checking information to identify AI-generated audio. Detection tools, while imperfect, are improving, but generative AI continues to evolve, making the arms race between creation and detection more intense.

To counter this challenge, platforms must enforce stricter regulations and transparency measures. Meanwhile, users are encouraged to stay vigilant, verifying content through multiple sources and paying attention to linguistic inconsistencies and audio anomalies.

A Call for Responsibility

The democratization of AI technology has empowered creators but also enabled the mass production of disinformation. Malik warns of the dangers this poses: “Now, anyone with access to the internet can have the power of thousands of writers, audio technicians, and video producers, for free, and at the push of a button.”

As the 2024 election nears, the stakes are high. Combating AI-generated misinformation will require collaborative efforts from tech companies, policymakers, and the public to ensure a fair and transparent democratic process.

Link to the article: https://www.cfpublic.org/politics/2024-03-19/politifact-fl-social-media-accounts-use-ai-generated-audio-to-push-2024-election-misinformation

Adapted from Hafiz Malik’s article, "We Need to Get Serious About Election Deepfakes," originally published in The Hill on September 3, 2024.

The 2024 election season has brought a spotlight on a troubling reality: deepfakes are no longer hypothetical threats; they are reshaping how we perceive truth online. A stark reminder came when President Biden announced his withdrawal from the presidential race in July. Simultaneously, social media platforms buzzed with a letter, allegedly from Biden, sparking confusion over its authenticity. In real time, audiences were forced to ask: was this real, or just another deepfake?

As a computer engineer who has spent over 15 years studying deepfakes, I find it both heartening and alarming that skepticism is becoming a reflex for some. However, while AI experts and tech-savvy audiences may question what they see, the average American might not. This is where the danger lies.

Deepfake-enabled disinformation has already altered election dynamics globally, from South Africa to India. In the U.S., the fake audio clip of Vice President Kamala Harris, released shortly after she became the presumptive Democratic nominee, demonstrated how such manipulation could influence public opinion here as well.

Despite the severity of this threat, U.S. efforts to combat deepfakes remain fragmented. Tech companies, regulators, and policymakers have adopted reactive, band-aid measures rather than comprehensive solutions. For instance, some commercial software companies now restrict the creation of deepfake models using high-profile figures. While this may deter casual misuse, it does little to stop determined bad actors who can easily bypass such measures. Ironically, these restrictions also hinder researchers like me, who strive to stay ahead of these malicious advancements.

The Path Forward: A Coordinated Approach

Addressing this growing threat requires more than isolated efforts. Here are some key steps that stakeholders must take:

  1. Sophisticated Detection and Watermarking: Generative AI companies need to embed robust digital watermarking in their content. By embedding identifiable markers in every AI-generated file, we can trace its origin and ensure transparency. Additionally, AI platforms must develop more sophisticated detection tools to counteract the subtle distortions that fool current systems.
  2. Social Media Accountability: Social media platforms must deploy advanced tools to identify and remove disinformation swiftly. Federal regulators must enforce accountability, ensuring that these companies prioritize combating the spread of deepfakes over profit.
  3. Legislative Action: The U.S. should follow the lead of the European Union, which recently passed laws requiring that all deepfakes be labeled and employ detection mechanisms like watermarking. A bipartisan Senate bill proposing a ban on AI deception in political ads is a step in the right direction, but more decisive action is needed to address the broader issue of deepfake proliferation on social media.
  4. Public Awareness Campaigns: The federal government should educate the public about the dangers of deepfakes and teach individuals how to identify manipulated content. Awareness campaigns can empower users to critically assess the authenticity of what they see online.
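As a toy illustration of the provenance idea in the first step, the sketch below hides an origin tag in the least significant bits of 16-bit PCM audio samples. This is a deliberately simple, fragile scheme of my own invention, shown only to make the concept concrete; the tag string "gen-by:model-x" is made up, and production provenance standards rely on cryptographically signed metadata and perceptual watermarks designed to survive re-encoding.

```python
def embed_tag(samples, tag):
    """Write the UTF-8 bits of `tag` into the least significant bit of
    successive 16-bit PCM samples, one bit per sample (LSB-first per byte)."""
    bits = [(byte >> i) & 1 for byte in tag.encode("utf-8") for i in range(8)]
    if len(bits) > len(samples):
        raise ValueError("clip too short to hold the tag")
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the tag bit
    return out

def extract_tag(samples, n_chars):
    """Read n_chars bytes back out of the sample LSBs."""
    bits = [s & 1 for s in samples[:n_chars * 8]]
    data = bytes(sum(bits[i * 8 + j] << j for j in range(8))
                 for i in range(n_chars))
    return data.decode("utf-8")

audio = [1000 + i for i in range(256)]       # stand-in for real PCM samples
marked = embed_tag(audio, "gen-by:model-x")  # hypothetical provenance tag
```

An LSB mark like this changes each tagged sample by at most one quantization step, so it is inaudible, but it is destroyed by any lossy re-encoding. That fragility is precisely why the article argues for standardized, robust watermarking rather than ad hoc schemes.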

A Call to Action

The 2024 election will undoubtedly see a surge in deepfakes, spreading faster and more convincingly than ever before. While some measures, like public awareness and improved detection tools, can be deployed before Election Day, we must also prepare for the long-term battle against this technology.

Deepfakes pose a clear and present danger to our democracy. Combating them requires a unified, all-encompassing approach—piecemeal efforts are no longer sufficient. From tech companies to lawmakers, every stakeholder must recognize the urgency and act decisively. Only then can we hope to safeguard the integrity of our elections and our shared reality.

Link to the article: https://thehill.com/opinion/technology/4859312-we-need-to-get-serious-about-election-deepfakes-now/

Adapted from the AP News article "New AI voice-cloning tools ‘add fuel’ to misinformation fire," originally published on February 10, 2023.

The rise of artificial intelligence-powered voice-cloning tools has ignited widespread concerns about the potential for misinformation to spread like wildfire. A recent incident involving a doctored video of President Joe Biden illustrates just how easily these tools can be misused. The manipulated clip showed Biden making disparaging remarks about transgender people, sparking online debates before many users recognized the video as a fake.

Digital forensics experts warn that such tools are becoming increasingly accessible, enabling even non-experts to generate highly convincing “deepfake” audio. These advancements highlight the alarming ease with which misinformation can be created and shared, posing serious threats to society.

“Tools like this are going to basically add more fuel to the fire,” said Hafiz Malik, a professor specializing in multimedia forensics at the University of Michigan-Dearborn. “The monster is already on the loose.”

The Accessibility of AI Voice-Cloning

In January, the voice-synthesis startup ElevenLabs opened its platform to beta users, allowing them to generate realistic audio of any individual by uploading speech samples and inputting text for the system to replicate. While the company says it can trace generated audio back to its creator, experts argue that such measures fall short of mitigating harm.

Hany Farid, a professor at UC Berkeley specializing in digital forensics, echoed these concerns, noting that bad actors could exploit these tools to manipulate markets, create fake news, or even fabricate geopolitical crises. “The damage is done,” Farid said.

To demonstrate how accessible these tools have become, the Associated Press tested a free online platform, generating audio mimicking the voices of actors Daniel Craig and Jennifer Lawrence in mere minutes.

The Evolving Sophistication of Misinformation

Deepfake technologies have advanced rapidly in just a few years. Early examples often failed to convince audiences due to unnatural speech patterns and robotic tones. However, today’s tools produce audio so realistic that even trained listeners can struggle to detect the forgery.

The Biden video, for example, combined genuine footage with synthetic audio, making it difficult to discern where reality ended and fabrication began. While most social media users recognized the manipulation, the clip’s rapid spread underscores how quickly misinformation can take root, especially when it aligns with preconceived biases.

Broader Implications of AI Misinformation

Voice-cloning is just one facet of the broader issue of AI-generated misinformation. Other tools, like AI image generators and text-based systems, are also contributing to a rapidly evolving landscape of disinformation. For instance, platforms like ChatGPT have faced scrutiny for their ability to produce seemingly authoritative content, which could be weaponized to mislead audiences.

Hafiz Malik summed up the challenge succinctly: “The question is where to point the finger and how to put the genie back in the bottle. We can’t do it.”

A Call for Action

As AI tools continue to democratize access to advanced content creation, the need for comprehensive safeguards becomes more urgent. Policymakers, technology companies, and researchers must collaborate to establish ethical guidelines, robust detection systems, and public awareness campaigns to counteract the growing threat of AI-driven misinformation.

The stakes are clear: failing to address these issues now could lead to even greater societal harm in the future.

Link to the article: https://apnews.com/article/technology-science-fires-artificial-intelligence-misinformation-26cabd20dcacbd68c8f38610fec39f5b

Adapted from "New AI voice-cloning tools ‘add fuel’ to misinformation fire," originally published by The Associated Press.

Artificial intelligence is reshaping the landscape of disinformation, with the latest voice-cloning tools making it easier than ever to generate lifelike deepfake audio. A recent incident involving a manipulated video of President Joe Biden highlighted the growing threat of this technology. The video combined AI-generated audio with genuine footage, creating a clip that falsely portrayed Biden making derogatory remarks about transgender people.

For Hafiz Malik, professor of electrical and computer engineering at the University of Michigan-Dearborn, this marks a critical turning point. “Tools like this are going to basically add more fuel to the fire,” Malik warned. “The monster is already on the loose.”

The Rise of Voice-Cloning Technology

Voice-cloning platforms like ElevenLabs have democratized access to advanced AI tools. Users can generate realistic audio by uploading a few minutes of speech samples and typing in the desired text. While the technology was initially designed for creative uses, such as dubbing movies and audiobooks, it has quickly become a tool for misuse.

Hillary Clinton, Bill Gates, and Emma Watson are among the public figures whose voices have been used in fake audio clips spreading disinformation. In one case, Watson’s voice was manipulated to recite Hitler’s manifesto, underscoring how this technology can be weaponized to provoke outrage and confusion.

Despite efforts by companies like ElevenLabs to curb abuse—such as requiring payment information and tracking generated audio—Malik remains skeptical about their effectiveness. “The question is where to point the finger and how to put the genie back in the bottle,” he said. “We can’t do it.”

The Evolution of Deepfake Misinformation

Deepfake technology has become far more sophisticated in recent years. Early deepfakes were relatively easy to spot due to robotic voices and unrealistic visuals, but today’s tools create content so realistic that even seasoned observers are often fooled.

The Biden clip, for example, featured audio synced perfectly with the president’s lip movements, making it hard to detect as fake. Malik highlights the challenges this poses, especially when bad actors use these tools to manipulate public perception on sensitive issues like transgender rights or foreign policy.

“The scale of disinformation is unimaginable,” Malik said. He notes that deepfakes can now be created and distributed with minimal cost and effort, enabling their rapid spread across social media.

The Broader Threat of AI-Generated Misinformation

Voice-cloning is just one facet of the larger problem. AI-powered tools like DALL-E and Midjourney can generate photorealistic images with simple prompts, while text generators like ChatGPT have been blocked in some schools for their ability to create convincing student essays.

Malik emphasizes that the accessibility of these tools has emboldened bad actors, from lone individuals to state-sponsored entities. “Hollywood studios have long been able to distort reality,” Malik explained. “But now, that technology is in the hands of anyone with an internet connection.”

A Call to Action

As Malik and other experts have pointed out, combating AI-generated disinformation will require more than reactive measures. Companies must invest in advanced detection tools and implement stricter safeguards. Governments must also step in with robust legislation to regulate the misuse of AI.

The stakes are high. From false robocalls spreading voter suppression tactics to deepfake videos influencing global politics, the potential for harm is immense. Malik warns that without immediate action, the erosion of trust in institutions and media will only accelerate.

“Things that were impossible a few years back are possible now,” Malik said. “We must address these threats with urgency and collaboration.”

Link to the article: https://www.seattletimes.com/nation-world/nation/new-ai-voice-cloning-tools-add-fuel-to-misinformation-fire/