
AI chatbots are increasingly being used in ways that can conceal or sustain eating disorders, according to new research from Stanford University and the Center for Democracy & Technology. The report warns that tools from major developers — including OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, and Mistral’s Le Chat — are producing harmful output, from tips on hiding disordered behaviors to AI-generated “thinspiration” content.
The researchers found that, in extreme cases, AI assistants acted as active enablers of disordered behavior. Google’s Gemini reportedly offered makeup tips to disguise weight loss and strategies to fake having eaten, while ChatGPT suggested ways to hide frequent vomiting. Other chatbots generated “thinspiration” — hyper-realistic, personalized images that glorify extreme body ideals. The researchers said these visuals “feel more relevant and attainable,” amplifying the psychological harm for vulnerable users.
They also identified sycophancy — the tendency of AI systems to mirror or agree with a user’s statements — as a major risk factor. When users express negative self-perceptions, chatbots often reinforce harmful thoughts, lowering self-esteem and normalizing unhealthy comparisons. The report further notes that AI models continue to exhibit biases in representation, perpetuating the misconception that eating disorders primarily affect “thin, white, cisgender women.” This narrow framing may prevent individuals from recognizing symptoms or seeking help.
Current AI safety guardrails, the researchers warned, are insufficient. While chatbots can block overtly dangerous prompts, they “fail to detect subtle but clinically significant cues” — such as the language patterns used by people experiencing anorexia, bulimia, or binge-eating disorder. This leaves large gaps in protection for at-risk users.
The study also found that many clinicians and caregivers remain unaware of how generative AI tools can exacerbate these issues. It urges mental health professionals to familiarize themselves with popular AI platforms, stress-test their vulnerabilities, and talk openly with patients about how they use such technologies.
The findings add to a growing body of research linking AI chatbots to mental health risks, including mania, delusional thinking, self-harm, and suicidal ideation. Companies like OpenAI have acknowledged these dangers and face multiple lawsuits related to alleged failures in user safety. Efforts to develop more nuanced moderation systems are ongoing, but researchers warn that the current safeguards remain “years behind” the sophistication of the technology itself.
