OpenAI’s ChatGPT has not yet met the European Union’s data accuracy standards, according to a recent assessment by a task force of the EU’s privacy watchdog. Despite efforts by OpenAI to enhance the factual reliability of ChatGPT’s outputs, the task force asserts that these measures are insufficient to satisfy the comprehensive requirements of EU data protection laws.
The task force, a collective body uniting Europe’s national privacy watchdogs, was established after national regulators, most notably Italy’s data protection authority, raised concerns last year. Those concerns prompted a focused investigation into ChatGPT’s compliance with EU regulations.
The task force’s findings, detailed in a report released on its website this past Friday, reveal ongoing challenges in ensuring that the AI’s output adheres to the EU’s principle of data accuracy.
According to the task force, while OpenAI has made significant strides in adhering to the transparency principle—which helps prevent misinterpretations of ChatGPT’s outputs—the measures still fall short of fully complying with the data accuracy principle. This principle is crucial, as it mandates that data handled by entities within the EU must be accurate and, where necessary, kept up to date.
The nature of ChatGPT’s training and function, which is inherently probabilistic, predisposes the model to produce outputs that might be biased or entirely fabricated. This characteristic poses a substantial barrier to compliance with the data accuracy requirement.
The task force also highlighted a significant concern regarding the perception of ChatGPT’s outputs: end users are likely to take the information provided by the AI as factually accurate, regardless of its actual veracity. This issue is particularly problematic when the outputs include information related to individuals.
Moreover, the task force noted that investigations by national privacy watchdogs in several EU member states are still ongoing. Consequently, a complete description of the results of these investigations is not yet possible, and the findings presented should be seen as an initial common denominator among the national authorities.
As of now, OpenAI has not publicly responded to these findings.
Featured Image courtesy of Leon Neal/Getty Images