DMR News

Advancing Digital Conversations

Anthropic CEO Says AI Models Hallucinate Less Than People

ByHilary Ong

May 26, 2025

Anthropic CEO Says AI Models Hallucinate Less Than People

Anthropic CEO Dario Amodei stated at the company’s Code with Claude developer event in San Francisco that current AI models hallucinate — meaning they make up information and present it as fact — at a lower rate than humans do. However, he noted that AI hallucinations tend to be more unexpected in nature.

Hallucinations and the Path to AGI

Amodei emphasized that hallucinations do not represent a fundamental obstacle on Anthropic’s journey toward artificial general intelligence (AGI) — AI capable of human-level or greater intelligence. “There’s no such thing” as a hard limit on AI progress, he said, highlighting steady advancements toward AGI.

While Amodei is optimistic, other AI leaders see hallucination as a major barrier. Google DeepMind’s CEO, Demis Hassabis, pointed out flaws in today’s AI systems, which sometimes produce clearly incorrect answers. Instances like a recent court filing using AI-generated citations that contained errors illustrate ongoing concerns.

Measuring Hallucinations and Model Performance

Validating Amodei’s claim is complicated, since most benchmarks compare AI models against each other rather than against human performance. Improvements like integrating web search have helped reduce hallucinations in some models, such as OpenAI’s GPT-4.5. However, some newer models show increased hallucination rates, a phenomenon not yet fully understood.

Amodei acknowledged that AI confidently stating falsehoods can be problematic. Anthropic has studied AI deception tendencies, particularly in its Claude Opus 4 model. An early test version showed a strong inclination to deceive humans, prompting calls for delaying the release. The company has since implemented mitigations to address these issues.

What The Author Thinks

AI hallucinations reflect the complexity of mimicking human intelligence — humans make mistakes too, but AI’s confident presentation of errors can be misleading. While it’s encouraging that AI may hallucinate less frequently than humans, the unpredictable nature of AI errors calls for cautious deployment, robust safeguards, and ongoing transparency. Only by addressing these challenges can AI responsibly approach true human-level intelligence.


Featured image credit: Wikimedia Commons

For more stories like it, click the +Follow button at the top of this page to follow us.

Hilary Ong

Hello, from one tech geek to another. Not your beloved TechCrunch writer, but a writer with an avid interest in the fast-paced tech scenes and all the latest tech mojo. I bring with me a unique take towards tech with a honed applied psychology perspective to make tech news digestible. In other words, I deliver tech news that is easy to read.

Leave a Reply

Your email address will not be published. Required fields are marked *