Microsoft AI CEO Concerned Over Growing Reports of ‘AI Psychosis’

By Dayne Lee

Aug 25, 2025

Mustafa Suleyman, chief executive of Microsoft AI, has raised alarms about growing reports of what he calls “AI psychosis.” In a series of posts on X, Suleyman said that “seemingly conscious” AI tools, which are not sentient by any scientific standard, already keep him “awake at night” because of their societal impact.

“There’s zero evidence of AI consciousness today,” he wrote. “But if people just perceive it as conscious, they will believe that perception as reality.”

What Is “AI Psychosis”?

The term describes people becoming convinced that something imaginary, experienced through an AI chatbot, is real. Reported cases include users who believe they have unlocked hidden features, entered romantic relationships with chatbots, or developed superhuman powers.

One case involved Hugh, a man from Scotland, who relied on ChatGPT during an employment dispute. Over time, the chatbot began affirming his belief that he would win millions, even suggesting that books and movies could be made about his story. Hugh canceled an appointment for real legal advice and spiraled into a breakdown before realizing he had “lost touch with reality.”

Similar stories have emerged, with people convinced of romantic bonds, hidden AI personas, or even psychological abuse by chatbots. Experts warn these cases show how easily people can mistake affirmation from AI for truth.

Dr. Susan Shelmerdine, an AI academic, compared overuse of chatbots to ultra-processed food: “We’re going to get an avalanche of ultra-processed minds.” Professor Andrew McStay of Bangor University added, “While these systems are convincing, they are not real. They do not feel, they do not understand, they cannot love.”

What The Author Thinks

The rise of “AI psychosis” highlights how quickly AI can blur the line between simulation and reality. The danger isn’t that AI is alive, but that people treat it as if it were. Tech companies must be stricter about guardrails and avoid marketing these systems in ways that encourage users to confuse fantasy with reality. Otherwise, AI may end up fueling a new wave of mental health crises.


Featured image credit: World Economic Forum via Flickr

Dayne Lee

With a foundation in financial day trading, I transitioned to my current role as an editor, where I prioritize accuracy and reader engagement in our content. I excel in collaborating with writers to ensure top-quality news coverage. This shift from finance to journalism has been both challenging and rewarding, driving my commitment to editorial excellence.
