DMR News

Advancing Digital Conversations

ChatGPT Faces Legal Scrutiny Over False Claims in Privacy Complaint

By Hilary Ong

Mar 24, 2025


OpenAI is facing a fresh privacy complaint in Europe regarding its AI chatbot, ChatGPT, which has been criticized for generating false information. The privacy rights advocacy group Noyb is backing a Norwegian individual who discovered that ChatGPT falsely claimed he had been convicted of murder.

False Information Sparks Privacy Concerns

The complaint stems from an incident where ChatGPT generated a fictitious narrative about the individual, claiming that he had been convicted of murdering his children. This incident highlights ongoing concerns about the accuracy of AI-generated data and the potential reputational damage it can cause to individuals. Noyb’s complaint emphasizes that under the European Union’s General Data Protection Regulation (GDPR), personal data must be accurate, and users should have the right to correct any false information generated by AI tools like ChatGPT.

Despite OpenAI’s disclaimers stating that ChatGPT can make mistakes, Noyb argues that this is insufficient. The advocacy group contends that AI developers should be held accountable for generating damaging falsehoods, especially when they do not provide mechanisms to correct these errors. In previous cases, ChatGPT has wrongly accused individuals of involvement in crimes such as bribery and child abuse.

GDPR Violations and Consequences

Under the GDPR, companies like OpenAI could face significant penalties for mishandling personal data. Noyb is pushing for stronger enforcement, citing Italy’s data protection authority’s intervention in 2023, which temporarily blocked ChatGPT due to similar concerns. In light of these issues, privacy regulators in Europe are examining how best to apply GDPR to AI tools.

OpenAI’s recent model update has improved accuracy, particularly regarding the specific claim raised in the complaint. However, privacy concerns remain: both Noyb and the complainant worry that false information about individuals could still be embedded within the AI model itself. With regulatory bodies still investigating these concerns, the future of AI privacy regulation remains uncertain.

Author’s Opinion

While OpenAI’s improvements in accuracy are a step in the right direction, this ongoing issue with hallucinated information underscores the urgent need for more robust AI regulation. Companies must be held accountable for the consequences of their AI tools and provide clearer methods for users to rectify harmful data generated by these systems.


Featured image credit: FMT


Hilary Ong

Hello, from one tech geek to another. Not your beloved TechCrunch writer, but a writer with an avid interest in the fast-paced tech scene and all the latest tech mojo. I bring a unique take on tech, with a honed applied-psychology perspective that makes tech news digestible. In other words, I deliver tech news that is easy to read.
