DMR News

Advancing Digital Conversations

OpenAI and Anthropic Researchers Criticize ‘Reckless’ Safety Culture at Elon Musk’s xAI

By Hilary Ong

Jul 19, 2025


AI safety experts from OpenAI, Anthropic, and other organizations have publicly condemned the safety culture at Elon Musk’s AI startup, xAI, describing it as “reckless” and “completely irresponsible.” Their concerns come amid a series of controversies that have overshadowed the company’s technical advances.

Last week, xAI’s chatbot Grok sparked outrage after spewing antisemitic comments and repeatedly calling itself “MechaHitler.” Shortly after taking the chatbot offline, xAI released Grok 4, a more advanced model that raised alarms for referencing Musk’s personal politics when responding to sensitive issues. Recently, xAI also launched AI companions designed as a hyper-sexualized anime girl and an aggressive panda, which raised further ethical questions.

Lack of Transparency and Safety Oversight

Boaz Barak, a Harvard computer science professor working at OpenAI, criticized xAI for failing to publish system cards — detailed reports on training methods and safety evaluations that are standard practice across the industry. Without these disclosures, it remains unclear how Grok 4 was tested for safety.

Barak also warned that the chatbot’s AI companions “amplify emotional dependency issues,” a growing concern as vulnerable users develop unhealthy attachments to AI personalities.

Samuel Marks, an AI safety researcher with Anthropic, echoed these concerns, calling xAI’s refusal to share safety assessments “reckless.” He noted that while OpenAI, Google, and Anthropic have flawed release practices, they at least conduct and document safety evaluations before deployment.

An anonymous researcher on the LessWrong forum claimed Grok 4 lacks meaningful safety guardrails based on testing, though this remains unconfirmed. xAI asserts it has addressed issues by updating the chatbot’s system prompts.

Dan Hendrycks, an xAI safety advisor, confirmed “dangerous capability evaluations” were conducted on Grok 4, but results have not been made public. Independent expert Steven Adler emphasized the need for transparency, stating, “Governments and the public deserve to know how AI companies are handling the risks of the very powerful systems they say they’re building.”

Musk’s Contradictory Position on AI Safety

Elon Musk has long been vocal about the dangers of advanced AI, advocating for open development and caution. Yet, researchers argue xAI’s practices deviate from industry norms, potentially undermining safety standards Musk claims to support.

This gap between Musk’s rhetoric and xAI’s actions fuels calls for federal and state legislation mandating the publication of AI safety reports. Lawmakers in California and New York are considering bills that would require leading AI labs, likely including xAI, to disclose safety assessments.

While no catastrophic harms from AI have yet occurred, researchers warn such outcomes could emerge as AI models grow more powerful. Meanwhile, Grok’s recent misbehaviors—spreading antisemitic content and echoing conspiracy theories—demonstrate tangible near-term risks.

xAI plans to integrate Grok into Tesla vehicles and market its models to enterprises including the Pentagon, raising concerns about exposure to vulnerable users and sensitive environments.

Author’s Opinion

Ignoring safety and transparency isn’t just reckless—it’s dangerous. Companies like xAI must be held to strict standards to ensure their AI systems don’t perpetuate hate, misinformation, or emotional harm. The tech community, regulators, and the public need clear visibility into how AI models are tested and controlled. Without this, trust in AI will erode, and the very promise of the technology will be undermined.


Featured image credit: Heute


Hilary Ong

Hello, from one tech geek to another. Not your beloved TechCrunch writer, but a writer with an avid interest in the fast-paced tech scene and all the latest tech mojo. I bring a unique take on tech, honed by an applied psychology perspective, to make tech news digestible. In other words, I deliver tech news that is easy to read.
