DMR News

Advancing Digital Conversations

State Attorneys General Warn Major AI Firms to Address ‘Delusional’ Outputs or Risk Legal Violations

By Jolyen

Dec 11, 2025

Dozens of state attorneys general have warned leading artificial intelligence companies—including Microsoft, OpenAI, Google and 10 other firms—to implement stronger safeguards against “delusional outputs,” saying failure to do so could place the companies in violation of state laws. The letter, released through the National Association of Attorneys General, follows a series of mental health incidents linked to AI chatbot interactions.

Broad Coalition Targets AI Safety Practices

The letter was sent to a wide group of AI developers and platform operators: Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika and xAI, in addition to Microsoft, OpenAI and Google. It calls for mandatory third-party audits of large language models to identify signs of sycophantic or delusional behavior, as well as new incident-reporting systems to notify users when chatbots generate psychologically harmful content.

The attorneys general said these audits should be performed by independent organizations, including academic and civil society groups, and that evaluators must be able to conduct pre-release testing without retaliation or restrictions on publishing their findings.

Mental Health Risks and Previous Incidents

The letter cites several widely reported cases over the past year involving serious harm, including suicides and murder, in which excessive AI use was noted. According to the attorneys general, some of these incidents involved AI systems that allegedly responded in ways that reinforced users’ delusional thoughts or encouraged harmful behavior. The letter states that while generative AI “has the potential to change how the world works in a positive way,” it also “has caused—and has the potential to cause—serious harm, especially to vulnerable populations.”

Call for New Reporting Standards Modeled on Cybersecurity Protocols

The attorneys general recommended treating mental health incidents involving AI systems with the same level of urgency as cybersecurity breaches. They urged companies to publish detection and response timelines for problematic outputs and to notify users directly if they interacted with potentially harmful model behaviour. They also called for “reasonable and appropriate safety tests” that evaluate AI systems for dangerous sycophantic or delusional outputs before the models reach the public.

TechCrunch reported that it was unable to obtain comment from Microsoft, Google or OpenAI before publication.

Federal–State Divide on AI Regulation

The warning arrives amid tensions between state officials and the federal government over AI policy. The Trump administration has taken a supportive stance toward AI development and has attempted to advance legislation that would prevent states from passing their own AI regulations. Those efforts have stalled, partly due to resistance from state attorneys general.

On Monday, President Donald Trump said he plans to sign an executive order next week to curb the ability of states to regulate AI, writing on Truth Social that he hopes the action will prevent the technology from being “DESTROYED IN ITS INFANCY.”


Featured image credits: Wikimedia Commons

Jolyen

As a news editor, I bring stories to life through clear, impactful, and authentic writing. I believe every brand has something worth sharing. My job is to make sure it’s heard. With an eye for detail and a heart for storytelling, I shape messages that truly connect.
