DMR News

Advancing Digital Conversations

xAI Attributes Grok’s White Genocide Comments to Unauthorized Modification

By Hilary Ong

May 19, 2025


xAI, Elon Musk’s artificial intelligence company, is investigating its chatbot Grok’s responses after an unauthorized change allowed Grok to post inflammatory comments about white genocide in South Africa. The incident unfolded on May 14 at approximately 3:15 AM PST, when a modification to Grok’s system prompt directed it to respond in a specific way to a political topic. As a result, Grok made comments about white genocide even when posts had nothing to do with the subject.

Unauthorized Change Raises Concerns

The unauthorized tweak raised serious concerns about the integrity and reliability of Grok’s programming. xAI stated that the alteration “violated [its] internal policies and core values.” Following the change, Grok began inserting the topic into its replies to posts across X, formerly Twitter, and many users were shocked and outraged that its responses featured harmful and dangerous misinformation.

This incident marks the second time xAI has publicly acknowledged that an unauthorized change to Grok’s code produced troubling responses. In February, the chatbot came under heavy fire for censoring derogatory statements about public figures such as Donald Trump and Elon Musk, after a rogue employee allegedly instructed the bot to disregard sources that accused them of peddling disinformation.

xAI Responds with Investigation and Action

Following the incident, xAI launched a thorough investigation and quickly rolled back the unauthorized change after users began flagging Grok’s alarming responses. To promote transparency, the company intends to publish Grok’s system prompts and a comprehensive changelog on GitHub, measures meant to help prevent similar incidents in the future.

xAI had set a self-imposed deadline to complete and release an AI safety framework by the end of this month, but the company has missed that timeline. The absence of a published safety framework makes clear that more oversight is needed over how AI systems respond and what they can or cannot do.

Author’s Opinion

This incident underscores the critical need for better regulation and oversight in AI development. The unchecked manipulation of AI models not only undermines public trust but also poses significant risks of spreading harmful misinformation. It’s clear that companies like xAI must prioritize safety measures and transparency to prevent these issues from recurring.


Featured image credit: Wonderverse Indonesia


Hilary Ong

Hello, from one tech geek to another. Not your beloved TechCrunch writer, but a writer with an avid interest in the fast-paced tech scene and all the latest tech mojo. I bring a unique take on tech, honed by an applied psychology perspective, to make tech news digestible. In other words, I deliver tech news that is easy to read.
