OpenAI CEO Sam Altman has cautioned users about relying on ChatGPT for therapy or emotional support, warning that conversations with AI currently lack the legal privacy protections found in traditional doctor-patient or attorney-client relationships.
No Legal Confidentiality for AI Conversations Yet
Speaking on the podcast This Past Weekend w/ Theo Von, Altman explained that the problem lies in the absence of a legal or policy framework protecting users' chats with AI.
“People talk about the most personal stuff in their lives to ChatGPT,” Altman said. “Young people especially use it as a therapist, a life coach, or for relationship advice. But unlike with a human therapist or lawyer, there is no legal confidentiality for those conversations with AI.”
This lack of protection means that in legal cases, OpenAI might be compelled to hand over these conversations as evidence, exposing sensitive user data.
Altman acknowledged that the absence of such confidentiality could discourage broader adoption of AI tools for personal matters. He criticized the current state of affairs, calling for AI conversations to be granted the same privacy rights as those with human professionals.
OpenAI is currently appealing a court order that would require it to retain chat logs from hundreds of millions of ChatGPT users worldwide (excluding enterprise users), calling the mandate an “overreach.” Such demands could open the door for wider legal discovery and law enforcement requests.
Data Privacy in the Age of AI
The issue taps into broader concerns about digital privacy, especially as data is increasingly subpoenaed in criminal investigations or used in politically charged cases. For example, after the Supreme Court overturned Roe v. Wade, many users switched to privacy-focused health apps to safeguard sensitive data.
When asked about his own ChatGPT usage, Altman agreed with host Theo Von’s hesitance, emphasizing the need for clear legal protections before widespread personal use.
“I think it makes sense to want privacy clarity before you use [ChatGPT] a lot — like legal clarity,” Altman said.
What The Author Thinks
As AI becomes more integrated into daily life, protecting the privacy of deeply personal conversations is critical. Without legal frameworks to safeguard chats with AI from being disclosed, users may be reluctant to engage openly with these tools. Establishing clear, enforceable confidentiality laws for AI interactions isn’t just a technical issue — it’s a fundamental step toward responsible and ethical AI use that respects human dignity and privacy.