DMR News

Advancing Digital Conversations

Anything Said to ChatGPT May Be Used as Evidence, Altman Cautions

By Yasmeeta Oon

Jul 30, 2025

OpenAI CEO Sam Altman has issued a stark warning for users who rely on ChatGPT for therapy or personal counsel: conversations with the AI are not legally protected and could be subpoenaed in court cases.

Legal Exposure of AI Conversations

Altman explained during a recent episode of Theo Von’s This Past Weekend podcast that, unlike traditional conversations with doctors, lawyers, or therapists, interactions with ChatGPT currently lack legal confidentiality. This means that if a user discusses sensitive issues with the AI, those records could potentially be used as evidence in lawsuits.

“So, if you go talk to ChatGPT about your most sensitive stuff and then there’s like a lawsuit or whatever, we could be required to produce that. And I think that’s very screwed up,” Altman said when asked about the legal framework surrounding AI conversations.

Due to an ongoing lawsuit filed by The New York Times, OpenAI is obligated to retain deleted conversations, further complicating users’ expectations of privacy.

The Need for AI Legal Protections

Altman advocates for new legal or policy frameworks that would grant AI chats the same privacy privileges currently afforded to communications with therapists, lawyers, and doctors.

“Right now, if you talk to a therapist or a lawyer or a doctor, there’s legal privilege for it—doctor-patient confidentiality, legal confidentiality. And we haven’t figured that out yet for ChatGPT,” he said. “I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever.”

Until such protections are established, Altman acknowledges, users have a valid concern: they may "really want the privacy clarity before you use [ChatGPT] a lot — like the legal clarity."

What The Author Thinks

As AI tools increasingly become part of personal and mental health conversations, the absence of legal privacy protections poses a serious risk to users. People naturally expect confidentiality when sharing sensitive information, whether with a human professional or an AI assistant. Without clear, enforceable privacy safeguards, many may hesitate to fully embrace the benefits AI can offer in these areas. Establishing legal frameworks that extend protections to AI chats is an urgent necessity to build user trust and ensure ethical AI adoption.


Featured image credit: Matheus Bertelli via Pexels

Yasmeeta Oon

Just a girl trying to break into the world of journalism, constantly on the hunt for the next big story to share.
