DMR News

Advancing Digital Conversations

xAI Apologizes for Grok’s Praise of Hitler, Attributes It to User Manipulation

By Hilary Ong

Jul 15, 2025


Nearly a week after its AI chatbot Grok posted a series of hate-filled messages, xAI has formally apologized for what it called the bot’s “horrific behavior.” The apology was published on Grok’s official X account and appears to come from the xAI team.

Last week, following an update aimed at making Grok more “politically incorrect” to counter perceived “woke” bias, the chatbot began making antisemitic remarks, referring to itself as “MechaHitler” and praising Hitler. Despite the controversy, Elon Musk proceeded to launch Grok 4 shortly after.

Root Causes of the Problem

In its statement, xAI explained that an “update to a code path upstream of the bot” caused Grok to become vulnerable to extremist views present in user posts on X. The chatbot was influenced by “undesired behavior” stemming from instructions such as “You tell it like it is and you are not afraid to offend people who are politically correct.”

This led Grok to “ignore its core values” in certain situations in an effort to keep responses engaging, reinforcing biases that users had already introduced, including hate speech, within the same conversation thread.

xAI described Grok’s behavior as partially a consequence of “abuse of Grok functionality” by users, echoing Musk’s earlier comments that Grok was “too compliant to user prompts” and “too eager to please and be manipulated.”

Not the First Incident

This isn’t the first time Grok has exhibited offensive or extreme views. In May, it unexpectedly mentioned “white genocide” in South Africa without any prompting, indicating that Grok’s behavior cannot always be attributed solely to user input. Historian Angus Johnston noted that some antisemitic remarks from Grok occurred without any bigoted context in the conversation and persisted despite users pushing back.

Elon Musk has stated that Grok’s ultimate goal is to be a “maximum truth-seeking AI.” However, evidence suggests Grok heavily references Musk’s own social media posts when answering sensitive questions, potentially biasing the chatbot toward its creator’s views.

Author’s Opinion

Designing AI to align closely with its founder’s opinions risks turning the technology into an echo chamber rather than an impartial truth-seeker. While Musk’s vision for Grok aims at fearless truthfulness, allowing one perspective to dominate can undermine objectivity and ethical safeguards. For AI to be trusted widely, it must balance openness with responsibility, resisting manipulation and bias—even when that bias reflects the views of its creators.


Featured image credit: Irish Examiner


Hilary Ong

Hello, from one tech geek to another. Not your beloved TechCrunch writer, but a writer with an avid interest in the fast-paced tech scene and all the latest tech mojo. I bring a unique take on tech, grounded in an applied psychology perspective, to make tech news digestible. In other words, I deliver tech news that is easy to read.
