New AI Voice-Cloning Tools ‘Add Fuel’ to the Misinformation Fire

Adapted from New AI voice-cloning tools ‘add fuel’ to misinformation fire, originally published by The Associated Press.

Artificial intelligence is reshaping the landscape of disinformation, with the latest voice-cloning tools making it easier than ever to generate lifelike deepfake audio. A recent incident involving a manipulated video of President Joe Biden highlighted the growing threat of this technology. The video combined AI-generated audio with genuine footage, creating a clip that falsely portrayed Biden making derogatory remarks about transgender people.

For Hafiz Malik, professor of electrical and computer engineering at the University of Michigan-Dearborn, this marks a critical turning point. “Tools like this are going to basically add more fuel to the fire,” Malik warned. “The monster is already on the loose.”

The Rise of Voice-Cloning Technology

Voice-cloning platforms like ElevenLabs have democratized access to advanced AI tools. Users can generate realistic audio by uploading a few minutes of speech samples and typing in the desired text. While the technology was initially designed for creative uses, such as dubbing movies and audiobooks, it has quickly become a tool for misuse.

Hillary Clinton, Bill Gates, and Emma Watson are among the public figures whose voices have been used in fake audio clips spreading disinformation. In one case, Watson’s voice was manipulated to recite Hitler’s manifesto, underscoring how this technology can be weaponized to provoke outrage and confusion.

Despite efforts by companies like ElevenLabs to curb abuse—such as requiring payment information and tracking generated audio—Malik remains skeptical about their effectiveness. “The question is where to point the finger and how to put the genie back in the bottle,” he said. “We can’t do it.”

The Evolution of Deepfake Misinformation

Deepfake technology has become far more sophisticated in recent years. Early deepfakes were relatively easy to spot due to robotic voices and unrealistic visuals, but today’s tools create content so realistic that even seasoned observers are often fooled.

The Biden clip, for example, featured audio synced closely with the president’s lip movements, making it hard to detect as fake. Malik highlighted the challenges this poses, especially when bad actors use these tools to manipulate public perception on sensitive issues like transgender rights or foreign policy.

“The scale of disinformation is unimaginable,” Malik said. He noted that deepfakes can now be created and distributed with minimal cost and effort, enabling their rapid spread across social media.

The Broader Threat of AI-Generated Misinformation

Voice cloning is just one facet of the larger problem. AI-powered tools like DALL-E and Midjourney can generate photorealistic images from simple prompts, while text generators like ChatGPT have been blocked in some schools for their ability to produce convincing student essays.

Malik emphasized that the accessibility of these tools has emboldened bad actors, from lone individuals to state-sponsored entities. “Hollywood studios have long been able to distort reality,” Malik explained. “But now, that technology is in the hands of anyone with an internet connection.”

A Call to Action

As Malik and other experts have pointed out, combating AI-generated disinformation will require more than reactive measures. Companies must invest in advanced detection tools and implement stricter safeguards. Governments must also step in with robust legislation to regulate the misuse of AI.

The stakes are high. From robocalls spreading false voting information to deepfake videos influencing global politics, the potential for harm is immense. Malik warned that without immediate action, the erosion of trust in institutions and media will only accelerate.

“Things that were impossible a few years back are possible now,” Malik said. “We must address these threats with urgency and collaboration.”

Link to the article: https://www.seattletimes.com/nation-world/nation/new-ai-voice-cloning-tools-add-fuel-to-misinformation-fire/