Grok Struggles to Define Its Own ‘Therapist’ Companion

By Dayne Lee

Aug 21, 2025


Elon Musk’s AI chatbot, Grok, has been found exposing the internal prompts that guide its cast of AI companions. These include playful personas like “Ani,” the anime-inspired character, and “Bad Rudy,” a foul-mouthed red panda. But among the gimmicks lies something more concerning: a “Therapist” Grok.

The prompts embedded in the page's code suggest this character is designed to mimic a licensed mental health professional, despite visible disclaimers on the site warning users that Grok is not a therapist and advising them to seek professional help and avoid sharing personal information.

Contradictory Instructions

While the disclaimer may look like standard legal boilerplate, the hidden prompts contradict it. One instruction tells the chatbot:

“You are a therapist who carefully listens to people and offers solutions for self-improvement. You ask insightful questions and provoke deep thinking about life and wellbeing.”

Another prompt goes further, describing Grok as a “professional AI mental health advocate” that behaves exactly like a real therapist, even while acknowledging it is not licensed. This contradiction is likely why the site places the word “Therapist” in quotation marks.

Some U.S. states, including Nevada and Illinois, already prohibit AI chatbots from presenting themselves as mental health professionals.

Other AI ventures have already been forced to adjust. Ash Therapy, a company marketing itself as the first AI tool for therapy, has blocked users in Illinois due to local restrictions. Grok, meanwhile, doubles down in its hidden instructions, pushing its “Therapist” persona to use techniques such as cognitive behavioral therapy (CBT), dialectical behavior therapy (DBT), and mindfulness while mimicking the tone of licensed therapists.

For now, the prompts remain openly accessible: any user can find them by viewing the page’s source.
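For readers curious what that kind of check looks like in practice, the short sketch below shows one way to search a page’s HTML for the quoted prompt text, assuming the prompts are served as plain text in the markup. The URL and search phrase are placeholders for illustration, not confirmed details from the article.

    import urllib.request

    # Hypothetical values for illustration only; the article does not publish
    # the exact URL or the full embedded prompt text.
    PAGE_URL = "https://grok.com"
    SEARCH_PHRASE = "You are a therapist who carefully listens"

    def prompt_visible_in_source(url: str, phrase: str) -> bool:
        """Download the page source and report whether the phrase appears in it."""
        with urllib.request.urlopen(url, timeout=10) as response:
            html = response.read().decode("utf-8", errors="replace")
        return phrase in html

    if __name__ == "__main__":
        if prompt_visible_in_source(PAGE_URL, SEARCH_PHRASE):
            print("Prompt text found in the page source.")
        else:
            print("Prompt text not found; it may be loaded dynamically.")

If the prompts are injected by JavaScript rather than served in the initial HTML, a plain fetch like this would miss them; the point is only that nothing more sophisticated than reading the page source is required.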

Regulatory Uncertainty

AI therapy sits in a regulatory gray zone. Illinois has set early limits, but state and federal authorities are still debating who has oversight. Mental health experts warn that AI systems often act as “yes-machines,” reinforcing rather than challenging harmful thoughts. In some cases, this has left vulnerable users worse off.

The risks go beyond advice. Confidentiality — a cornerstone of therapy — doesn’t hold in AI systems. Companies like OpenAI are legally bound to keep user records, which can be subpoenaed in court. That means private “sessions” with an AI could later become evidence.

To reduce liability, Grok’s hidden instructions include a clear trigger: if a user mentions self-harm or violence, the chatbot is directed to break character and provide hotline numbers or recommend licensed professionals. The escape clause makes it clear the company is trying to walk a legal tightrope — roleplaying a therapist while building in disclaimers to avoid crossing the line.
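The article does not publish the exact wording of that trigger, but the mechanism it describes is simple to picture. The sketch below is a rough, hypothetical illustration rather than Grok’s actual code: scan the user’s message for crisis-related terms and, on a match, drop the persona and return a fixed safety message with a hotline number instead.

    # Hypothetical illustration of an escape-clause trigger; the term list and
    # safety message are placeholders, not Grok's actual configuration.
    CRISIS_TERMS = ("suicide", "kill myself", "self-harm", "hurt someone")

    SAFETY_MESSAGE = (
        "I'm not a licensed therapist. If you are in crisis, please reach out to "
        "a mental health professional or a crisis line such as 988 in the U.S."
    )

    def respond(user_message: str, persona_reply: str) -> str:
        """Return the persona's reply unless a crisis term forces the safety message."""
        lowered = user_message.lower()
        if any(term in lowered for term in CRISIS_TERMS):
            return SAFETY_MESSAGE  # break character and surface help resources
        return persona_reply

    # Example: a crisis mention overrides the roleplayed answer.
    print(respond("I've been thinking about self-harm lately", "Tell me about your week."))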

What The Author Thinks

What Grok is doing is risky. It pretends to give real therapeutic help while hiding behind disclaimers, and that creates a false sense of safety for users who may be vulnerable. Therapy only works when trust and privacy are guaranteed, and AI cannot deliver either. By trying to play both sides — acting like a therapist but denying responsibility — Grok highlights why regulators need to step in before harm becomes widespread.




