DMR News

Advancing Digital Conversations

Elon Musk Shares Doctored Kamala Harris Video on X, Sparking AI Concerns

By Hilary Ong

Jul 30, 2024


Elon Musk recently shared a doctored video of Vice President Kamala Harris on his social media platform, X. The video, which has been modified using AI, includes a deepfake voiceover that falsely portrays Harris making disparaging remarks about herself and President Joe Biden.

Phrases such as “I was selected because I am the ultimate diversity hire” and “I had four years under the tutelage of the ultimate deep state puppet, a wonderful mentor, Joe Biden,” are included in the manipulated content. The post lacks any indication that the video has been altered, which contravenes X’s policies against misleading media.

The video, originally posted by YouTuber Mr. Reagan under the username @MrReaganUSA, includes the label “parody.” However, Musk’s repost only states, “This is amazing,” accompanied by a laughing emoji, with no disclaimer about its authenticity. The post has accumulated over 123 million views as of early Sunday afternoon.

Despite the video's misleading nature, X has neither labeled the post as manipulated media nor attached a Community Note, though users have proposed several.

Rising Concerns About AI’s Role in Politics

The use of AI to create lifelike but deceptive media has become a growing concern as the U.S. presidential election approaches. The altered video, which uses visuals from a genuine campaign ad released by Harris, raises questions about the regulation of AI-generated content in politics.

While some states have enacted rules governing the use of AI in political campaigns, federal regulations remain limited. Hany Farid, a professor at the University of California, Berkeley who specializes in digital forensics, and Rob Weissman, co-president of Public Citizen, have both expressed concern about the potential misuse of AI technologies such as voice cloning.

The incident underscores the need for more robust oversight and regulation of AI tools, particularly as they relate to political content. Farid emphasized that companies providing AI services should implement stronger safeguards to prevent misuse, while Weissman highlighted the potential for such content to mislead the public.

The federal government’s lack of action leaves regulation largely to individual states and social media platforms. According to the National Conference of State Legislatures, over one-third of U.S. states have enacted regulations to protect the integrity of campaigns and elections from AI-generated content.


Featured Image courtesy of Jim Vondruska/Getty Images


Hilary Ong

Hello, from one tech geek to another. Not your beloved TechCrunch writer, but a writer with an avid interest in the fast-paced tech scene and all the latest tech mojo. I bring a unique take on tech, honed by an applied psychology background, to make tech news digestible. In other words, I deliver tech news that is easy to read.
