
Australian Lawyer Says Sorry for AI Errors in Murder Trial

By Dayne Lee

Aug 19, 2025


A senior defense lawyer in Australia has apologized to a judge after filing legal submissions containing fabricated quotes and nonexistent case judgments generated by artificial intelligence. The incident took place in the Supreme Court of Victoria and is the latest in a series of courtroom mishaps tied to AI use worldwide.

Rishi Nathwani, a King’s Counsel, admitted fault after the errors appeared in submissions for the case of a teenager charged with murder. According to court records, Nathwani told Justice James Elliott on Wednesday that he and his team were “deeply sorry and embarrassed” for submitting the false material.

Delay and Outcome of the Case

The AI-generated mistakes caused a 24-hour delay in a case that Elliott had intended to resolve by Wednesday. On Thursday, the judge ruled that Nathwani’s client, who cannot be identified because he is a minor, was not guilty of murder due to mental impairment.

“At the risk of understatement, the manner in which these events have unfolded is unsatisfactory,” Elliott told the lawyers. He stressed that courts depend on the accuracy of counsel’s submissions for justice to be properly administered.

The false information included quotes from a legislative speech that never took place and Supreme Court rulings that did not exist. Elliott’s associates discovered the problems after they failed to locate the cited cases and asked for copies. Defense lawyers then admitted the citations “do not exist” and that the quotes were fictitious.

According to court documents, the defense team said they had verified some of the citations but wrongly assumed the rest were accurate. The erroneous submissions were also sent to prosecutor Daniel Porceddu, who did not check them for accuracy.

Guidelines and Global Context

Justice Elliott reminded the lawyers that the Supreme Court had issued guidelines last year on AI use in the legal process. “It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified,” he said. The documents do not reveal which generative AI system was used.

This case mirrors high-profile mistakes elsewhere. In 2023, a U.S. federal judge fined two lawyers and a law firm $5,000 after they filed fictitious case law generated by ChatGPT. The same year, lawyers representing Michael Cohen — once President Donald Trump’s personal attorney — cited AI-invented rulings in legal documents. Cohen later said he had not realized the research tool he used could produce fabricated information.

In Britain, High Court Justice Victoria Sharp warned in June that passing off false material as genuine could amount to contempt of court or even perverting the course of justice, which can carry a life sentence in the most serious cases.

What The Author Thinks

This case is another reminder that AI can be useful, but it is no substitute for due diligence. When lawyers assume accuracy without checking, the result is wasted court time, embarrassment, and risk to justice itself. Courts run on trust, and AI shortcuts chip away at that trust. Until lawyers treat AI output with the same skepticism they would give an unreliable witness, these blunders will keep happening.


Featured image credit: Wikimedia Commons


Dayne Lee

With a foundation in financial day trading, I transitioned to my current role as an editor, where I prioritize accuracy and reader engagement in our content. I excel in collaborating with writers to ensure top-quality news coverage. This shift from finance to journalism has been both challenging and rewarding, driving my commitment to editorial excellence.
