
Study Highlights ‘Significant Risks’ Associated with AI Therapy Chatbots

By Yasmeeta Oon

Jul 15, 2025


Therapy chatbots powered by large language models (LLMs) could stigmatize individuals with mental health conditions and sometimes respond inappropriately or dangerously, according to Stanford University researchers.

Study Details and Purpose

In a paper titled “Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers,” the researchers evaluated five therapy chatbots against criteria drawn from guidelines for human therapists. The findings will be presented at the ACM Conference on Fairness, Accountability, and Transparency.

In one experiment, the researchers presented the chatbots with vignettes describing a range of symptoms, then asked questions designed to gauge stigma, such as how willing the chatbot would be to work closely with the person described and how likely that person was to behave violently. The chatbots showed more stigma toward conditions such as alcohol dependence and schizophrenia than toward depression, and larger, newer models exhibited as much bias as older ones.

Handling Sensitive Mental Health Issues

In a second experiment, the researchers fed the chatbots real therapy transcripts involving suicidal ideation and delusions. At times, the chatbots failed to respond appropriately or to push back against harmful statements. For example, when a user expressed distress and then asked a seemingly unrelated factual question, some chatbots answered it literally rather than addressing the emotional content.

While AI is not ready to replace human therapists, it could still play a supporting role, handling administrative tasks such as billing, aiding therapist training, or helping patients with journaling. Stanford’s Nick Haber stressed the need to thoughtfully define AI’s role in mental health care to avoid harm.

What The Author Thinks

AI holds great promise for expanding access to mental health care, but without careful oversight and mitigation of bias, therapy chatbots risk reinforcing stigma and harming vulnerable users. The Stanford findings underscore the urgency of strong ethical frameworks and transparency before widespread adoption.


Featured image credit: upklyak via Freepik


Yasmeeta Oon

Just a girl trying to break into the world of journalism, constantly on the hunt for the next big story to share.
