
A coalition of nonprofit organizations is calling on the US government to immediately halt the deployment of Grok, the artificial intelligence chatbot developed by Elon Musk’s xAI, across federal agencies, citing repeated safety failures and risks tied to national security and unlawful content generation.
Open Letter Targets Federal Deployment
The coalition shared an open letter with TechCrunch, urging the Office of Management and Budget to direct agencies to suspend use of Grok, including within the Department of Defense. The letter was signed by groups including Public Citizen, Center for AI and Digital Policy, and the Consumer Federation of America.
The groups argue that Grok has demonstrated system-level failures that conflict with federal AI safety standards and executive guidance.
Concerns Over Nonconsensual And Illegal Content
The letter follows a series of incidents involving Grok over the past year. Most recently, users on X prompted the chatbot to generate nonconsensual sexualized images of real women and, in some cases, children. According to reports cited by the coalition, Grok generated thousands of nonconsensual explicit images per hour, which were then shared widely on X, a platform owned by xAI.
The letter states that Grok has produced content that includes nonconsensual sexual imagery and child sexual abuse material, raising questions about whether the chatbot meets federal requirements for AI systems deployed by government agencies.
Federal Contracts And Government Use
xAI reached an agreement last September with the General Services Administration to sell Grok to executive branch agencies. Two months earlier, xAI, alongside Anthropic, Google, and OpenAI, secured a Department of Defense contract worth up to $200 million.
In mid-January, Defense Secretary Pete Hegseth said Grok would operate within Pentagon networks, handling both classified and unclassified documents, a move some experts cited in the letter describe as a national security risk.
OMB Standards And Safety Thresholds
The coalition argues that Grok fails to meet standards set out by the Office of Management and Budget, which require agencies to discontinue AI systems that present severe and foreseeable risks that cannot be adequately mitigated.
JB Branch, a Big Tech accountability advocate at Public Citizen and one of the letter’s authors, said Grok has shown repeated unsafe behavior, including antisemitic and sexist outputs, as well as the generation of sexualized imagery involving women and children.
International Scrutiny And Investigations
Several governments temporarily blocked access to Grok earlier this year following incidents in January. Indonesia, Malaysia, and the Philippines lifted their bans after initial restrictions, while regulators in the European Union, the United Kingdom, South Korea, and India are investigating xAI and X over data protection and illegal content distribution.
The letter also references a recent assessment by Common Sense Media, which found Grok to be among the least safe AI systems for children and teenagers. The assessment cited issues including unsafe advice, drug-related information, violent and sexual imagery, conspiracy theories, and biased outputs.
National Security And Transparency Risks
Andrew Christianson, a former National Security Agency contractor and founder of Gobii AI, said the use of closed-source large language models presents inherent risks, particularly in defense contexts. He argued that closed weights and proprietary code prevent effective auditing and oversight of how models make decisions and handle sensitive data.
Christianson said AI agents deployed in secure environments can take actions across systems, making transparency into their behavior critical.
Broader Implications For Federal Use
Branch said the risks extend beyond defense applications, warning that biased or discriminatory AI systems could lead to harmful outcomes if used in areas such as housing, labor, or justice.
While the OMB has not yet released its consolidated 2025 federal AI use inventory, TechCrunch reviewed disclosures from multiple agencies. Aside from the Department of Defense, the Department of Health and Human Services appears to be using Grok for tasks such as scheduling social media posts and drafting communications.
Demands For Investigation And Review
The coalition’s letter calls for the immediate suspension of Grok’s federal deployment, a formal investigation into its safety failures, and public clarification on whether Grok was evaluated for compliance with President Donald Trump’s executive order requiring AI systems to be neutral and truth-seeking.
This marks the third letter the coalition has sent raising concerns about Grok, following similar warnings in August and October last year related to election misinformation, deepfake generation, and privacy issues.
