Advanced AI News

AI ‘hallucinations’ are a growing problem for the legal profession

By Advanced AI Bot | May 22, 2025


You’ve probably heard the one about the product that blows up in its creators’ faces when they’re trying to demonstrate how great it is.

Here’s a ripped-from-the-headlines yarn about what happened when a big law firm used an AI bot product developed by Anthropic, its client, to help write an expert’s testimony defending the client.

It didn’t go well. Anthropic’s chatbot, Claude, got the title and authors of one paper cited in the expert’s statement wrong, and injected wording errors elsewhere. The errors were incorporated in the statement when it was filed in court in April.


Those errors were enough to prompt the plaintiffs suing Anthropic — music publishers who allege that the AI firm is infringing their copyrights by feeding lyrics into Claude to “train” the bot — to ask the federal magistrate overseeing the case to throw out the expert’s testimony in its entirety.

It may also become a black eye for the big law firm Latham & Watkins, which represents Anthropic and submitted the errant declaration.

Latham argues that the errors were inconsequential, amounting to an “honest citation mistake and not a fabrication.” The firm’s failure to notice the errors before the statement was filed is “an embarrassing and unintentional mistake,” but it shouldn’t be exploited to invalidate the expert’s opinion, the firm told Magistrate Judge Susan van Keulen of San Jose, who is managing the pretrial phase of the lawsuit. The plaintiffs, however, say the errors “fatally undermine the reliability” of the expert’s declaration.

At a May 13 hearing conducted by phone, van Keulen herself expressed doubts.

“There is a world of difference between a missed citation and a hallucination generated by AI, and everyone on this call knows that,” she said, according to a transcript of the hearing cited by the plaintiffs. (Van Keulen hasn’t yet ruled on whether to keep the expert’s declaration in the record or whether to hit the law firm with sanctions.)

That’s the issue confronting judges as courthouse filings peppered with serious errors and even outright fabrications — what AI experts term “hallucinations” — continue to be submitted in lawsuits.

A roster compiled by the French lawyer and data expert Damien Charlotin now numbers 99 cases from federal courts in two dozen states as well as from courts in Europe, Israel, Australia, Canada and South Africa.

That’s almost certainly an undercount, Charlotin says. The number of cases in which AI-generated errors have gone undetected is incalculable, he says: “I can only cover cases where people got caught.”


In nearly half the cases, the guilty parties are pro se litigants — that is, people pursuing a case without a lawyer. Those litigants generally have been treated leniently by judges who recognize their inexperience; they seldom are fined, though their cases may be dismissed.

In most of the cases, however, the responsible parties were lawyers. Amazingly, in some 30 cases involving lawyers, the AI-generated errors were discovered, or appeared in documents filed, as recently as this year, long after the tendency of AI bots to “hallucinate” became evident. That suggests the problem is getting worse, not better.

“I can’t believe people haven’t yet cottoned to the thought that AI-generated material is full of errors and fabrications, and therefore every citation in a filing needs to be confirmed,” says UCLA law professor Eugene Volokh.

Judges have been making it clear that they have had it up to here with fabricated quotes, incorrect references to legal decisions and citations to nonexistent precedents generated by AI bots. Submitting a brief or other document without certifying the truth of its factual assertions, including citations to other cases or court decisions, is a violation of Rule 11 of the Federal Rules of Civil Procedure, which renders lawyers vulnerable to monetary sanctions or disciplinary actions.

Some courts have issued standing orders that the use of AI at any point in the preparation of a filing must be disclosed, along with a certification that every reference in the document has been verified. At least one federal judicial district has forbidden almost any use of AI.

The proliferation of faulty references in court filings also points to the most serious problem with the spread of AI bots into our daily lives: They can’t be trusted. Long ago it became evident that when even the most sophisticated AI systems are flummoxed by a question or task, they fill in the blanks in their own knowledge by making things up.


As other fields use AI bots to perform important tasks, the consequences can be dire. Many medical patients “can be led astray by hallucinations,” a team of Stanford researchers wrote last year. Even the most advanced bots, they found, couldn’t back up their medical assertions with solid sources 30% of the time.

It’s fair to say that workers in almost any occupation can fall victim to weariness or inattention; but attorneys often deal with disputes with thousands or millions of dollars at stake, and they’re expected to be especially rigorous about fact-checking formal submissions.

Some legal experts say there’s a legitimate role for AI in the law — even to make decisions customarily left to judges. But lawyers can hardly be unaware of the pitfalls for their own profession in failing to monitor bots’ outputs.

The very first sanctions case on Charlotin’s list originated in June 2023 — Mata vs. Avianca, a New York personal injury case that resulted in a $5,000 penalty for two lawyers who prepared and submitted a legal brief that was largely the product of the ChatGPT chatbot. The brief cited at least nine court decisions that were soon exposed as nonexistent. The case was widely publicized coast to coast.

One would think fiascos like this would cure lawyers of their reliance on artificial intelligence chatbots to do their work for them. One would be wrong. Charlotin believes that the superficially authentic tone of AI bots’ output may encourage overworked or inattentive lawyers to accept bogus citations without double-checking.

“AI is very good at looking good,” he told me. Legal citations follow a standardized format, so “they’re easy to mimic in fake citations,” he says.

It may also be true that the sanctions in the earliest cases, which generally amounted to no more than a few thousand dollars, were insufficient to capture the bar’s attention. But Volokh believes the financial consequences of filing bogus citations should pale next to the nonmonetary consequences.

“The main sanctions to each lawyer are the humiliation in front of the judge, in front of the client, in front of supervisors or partners…, possibly in front of opposing counsel, and, if the case hits the news, in front of prospective future clients, other lawyers, etc.,” he told me. “Bad for business and bad for the ego.”

Charlotin’s dataset makes for amusing reading — if mortifying for the lawyers involved. It’s peopled by lawyers who appear to be totally oblivious to the technological world they live in.

The lawyer who prepared the hallucinatory ChatGPT filing in the Avianca case, Steven A. Schwartz, later testified that he was “operating under the false perception that this website could not possibly be fabricating cases on its own.” When he began to suspect that the cases couldn’t be found in legal databases because they were fake, he sought reassurance — from ChatGPT!


“Is Varghese a real case?” he texted the bot. Yes, it’s “a real case,” the bot replied. Schwartz didn’t respond to my request for comment.

Other cases underscore the perils of placing one’s trust in AI.

For example, last year Keith Ellison, the attorney general of Minnesota, hired Jeff Hancock, a communications professor at Stanford, to provide an expert opinion on the danger of AI-faked material in politics. Ellison was defending a state law that made the distribution of such material in political campaigns a crime; the law was challenged in a lawsuit as an infringement of free speech.

Hancock, a well-respected expert in the social harms of AI-generated deepfakes — photos, videos and recordings that seem to be the real thing but are convincingly fabricated — submitted a declaration that Ellison duly filed in court.

But Hancock’s declaration included three hallucinated references apparently generated by ChatGPT, the AI bot he had consulted while writing it. One attributed to bogus authors an article he himself had written, but he didn’t catch the mistake until it was pointed out by the plaintiffs.

Laura M. Provinzino, the federal judge in the case, was struck by what she called “the irony” of the episode: “Professor Hancock, a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI — in a case that revolves around the dangers of AI, no less.”

That provoked her to anger. Hancock’s reliance on fake citations, she wrote, “shatters his credibility with this Court.” Noting that he had attested to the veracity of his declaration under penalty of perjury, she threw out his entire expert declaration and refused to allow Ellison to file a corrected version.

In a mea culpa statement to the court, Hancock explained that the errors might have crept into his declaration when he cut-and-pasted a note to himself. But he maintained that the points he made in his declaration were valid nevertheless. He didn’t respond to my request for further comment.

On Feb. 6, Michael R. Wilner, a former federal magistrate serving as a special master in a California federal case against State Farm Insurance, hit the two law firms representing the plaintiff with $31,000 in sanctions for submitting a brief with “numerous false, inaccurate, and misleading legal citations and quotations.”

In that case, a lawyer had prepared an outline of the brief for the associates assigned to write it. He had used an AI bot to help write the outline, but didn’t warn the associates of the bot’s role. Consequently, they treated the citations in the outline as genuine and didn’t bother to double-check them.

As it happened, Wilner noted, “approximately nine of the 27 legal citations in the ten-page brief were incorrect in some way.” He chose not to sanction the individual lawyers: “This was a collective debacle,” he wrote.

Wilner added that when he read the brief, the citations almost persuaded him that the plaintiff’s case was sound — until he looked up the cases and discovered they were bogus. “That’s scary,” he wrote. His monetary sanction for misusing AI appears to be the largest in a U.S. court … so far.


This story originally appeared in the Los Angeles Times.


