Anthropic’s Lawyers Apologize After its Claude AI Hallucinates Legal Citation in Copyright Lawsuit

By Advanced AI Bot · May 17, 2025

AI firm Anthropic has admitted its Claude AI fabricated a legal citation subsequently used by its lawyers, Latham & Watkins, in an ongoing copyright lawsuit. The company formally apologized for what it described as “an embarrassing and unintentional mistake,” after music publishers brought the erroneous citation to the court’s attention.

This incident throws a harsh spotlight on the persistent reliability issues plaguing artificial intelligence in high-stakes professional environments. Anthropic’s court filing detailed how its Claude.ai model provided a citation with what the company termed “an inaccurate title and inaccurate authors,” an error that regrettably slipped through manual review processes. The admission adds to a growing list of AI-generated misinformation cases, prompting serious questions about the current readiness of such technology for critical applications and underscoring the indispensable nature of meticulous human oversight, particularly as investment in legal AI technologies continues to accelerate.

Ivana Dukanovic, an associate at Latham & Watkins, stated in a court declaration that after her legal team identified a relevant academic article through a Google search to potentially bolster expert testimony, she tasked Claude.ai with generating a properly formatted legal citation using the direct link to the correct article.

However, the AI returned a citation that, while including the correct publication title, year, and link, featured an incorrect article title and erroneous authors. This critical mistake, along with other AI-introduced wording errors in footnotes, was not caught during the law firm’s manual citation check. This all transpired within Anthropic’s legal defense against music publishers who initiated a lawsuit in October 2023, alleging that the Claude AI was unlawfully trained using copyrighted song lyrics.
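The failure mode here is mechanical enough that it can also be guarded against programmatically. The sketch below shows one way a generated citation could be checked field by field against the source before it ever reaches a filing. It is an illustration of the verification gap, not the firm's actual workflow: it assumes the official Anthropic Python SDK, and `fetch_source_metadata` is a hypothetical placeholder standing in for a page scraper or bibliographic API lookup.

```python
# Minimal sketch: generate a citation with Claude, then verify the fields
# it returns against the actual source before trusting it.
# Assumes the official Anthropic Python SDK (pip install anthropic) and an
# ANTHROPIC_API_KEY in the environment. fetch_source_metadata() is a
# hypothetical placeholder, not a real library call.

import anthropic


def fetch_source_metadata(url: str) -> dict:
    """Hypothetical helper: return the real title/authors for `url`."""
    # Placeholder values; a real implementation would scrape the article
    # page or query a bibliographic database.
    return {"title": "Actual Article Title", "authors": ["Real Author"]}


def generate_citation(url: str) -> str:
    """Ask Claude to format a citation for the article at `url`."""
    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # model alias, assumed current
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": f"Format a Bluebook citation for the article at {url}. "
                       "Return only the citation text.",
        }],
    )
    return message.content[0].text


def citation_matches_source(citation: str, url: str) -> bool:
    """Reject the citation unless the source's real title and authors appear in it."""
    meta = fetch_source_metadata(url)
    ok_title = meta["title"].lower() in citation.lower()
    ok_authors = all(a.lower() in citation.lower() for a in meta["authors"])
    return ok_title and ok_authors


if __name__ == "__main__":
    url = "https://example.com/the-article"  # hypothetical link
    citation = generate_citation(url)
    if not citation_matches_source(citation, url):
        print("Citation does not match source -- route to a human reviewer.")
```

Notably, it was exactly the two fields such a check would compare, the title and the authors, that Claude got wrong in this case while preserving the publication, year, and link.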

Court Scrutiny And Industry Precedents

Before Anthropic’s formal admission, U.S. Magistrate Judge Susan van Keulen had ordered the company to respond to the allegations. Judge van Keulen described the potential use of a fabricated citation as “a very serious and grave issue,” as Music Business Worldwide reports. During that hearing, Matt Oppenheim, attorney for the music publishers, revealed he had contacted one of the purported authors and the journal, confirming no such article existed, and suggested, “we do believe it is likely that Ms. Chen used Anthropic’s AI tool Claude to develop her argument and authority to support it.”

In defense, Anthropic’s attorney, Sy Damle of Latham & Watkins, contended it was likely “a mis-citation” rather than an outright fabrication. Judge van Keulen pointedly remarked on the “world of difference between a missed citation and a hallucination generated by AI.” The music publishers went so far as to urge the judge to sanction the Latham & Watkins attorneys for the oversight, according to Law360.

This is not an isolated incident within the legal tech sphere. A recent report from The Verge highlighted another instance in which a California judge criticized law firms for submitting “bogus AI-generated research.” Furthermore, a Sorbara Law article discusses a 2025 Ontario case, Ko v. Li, in which Justice Myers emphasized that it is the lawyer’s duty “to use technology, conduct research, and prepare court documents competently,” a warning against errors stemming from AI use.

While Anthropic reached a settlement with the music publishers in January 2025 concerning Claude generating lyrics, the fundamental dispute over the legality of training AI on copyrighted material persists. In March 2025, Anthropic secured a procedural victory when a judge denied an injunction request by the publishers.

AI Hallucination: A Persistent Challenge

The phenomenon of AI models generating confident-sounding but entirely false or misleading information, widely known as ‘hallucination’, continues to be a significant hurdle for the artificial intelligence industry. Meta’s AI, for example, faced public backlash in July 2024 for incorrectly denying a major, widely reported news event.

Joel Kaplan, Meta’s global head of policy, candidly referred to hallucinations as an “industry-wide issue” and acknowledged that all generative AI systems can, and do, produce inaccurate outputs. Worryingly, even newer and supposedly more advanced models are not immune; OpenAI’s o3 and o4-mini models, released in April 2025, reportedly exhibit higher hallucination rates on some benchmarks than their predecessors. An OpenAI spokesperson conveyed that addressing these fabrications remains an active and ongoing area of research for the company.

Independent research, including work by Transluce AI, has suggested that certain AI training techniques, such as reinforcement learning, might inadvertently amplify these hallucination issues in advanced models, a perspective shared by researcher Neil Chowdhury in a discussion with TechCrunch. Other technology companies have also had to publicly address AI blunders.

For instance, Cursor AI’s customer support chatbot invented a fictitious company policy in mid-April 2025, which led to significant user backlash and a swift apology from its co-founder, who admitted the company’s “front-line AI support bot” was responsible for the incident. Commenting on the broader implications of such incidents, former Google chief decision scientist Cassie Kozyrkov stated, “this mess could have been avoided if leaders understood that (1) AI makes mistakes, (2) AI can’t take responsibility for those mistakes (so it falls on you), and (3) users hate being tricked by a machine posing as a human.”

Navigating AI In Legal And Other Critical Fields

The legal profession, among others, is actively grappling with the complexities and responsibilities of integrating AI tools into its workflows. A study presented at the CHI 2025 conference revealed a curious finding: individuals might show a preference for AI-generated legal advice if they are unaware of its origin, despite the known unreliability of current AI systems.

This tendency raises significant concerns about the potential for users to act on flawed or entirely incorrect information. In response to these challenges, the AI research community is actively pursuing solutions. One such example is the SAFE (Search-Augmented Factuality Evaluator) system, developed by researchers from Google DeepMind and Stanford University, which is designed to enhance the truthfulness of AI chatbot responses by cross-referencing generated facts with information from Google Search.
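For a sense of what “cross-referencing generated facts” involves in practice, the sketch below captures the general shape of such a pipeline: decompose a response into atomic claims, retrieve evidence for each, and issue a per-claim verdict. It follows the publicly described gist of SAFE rather than DeepMind’s actual implementation, and `extract_claims` and `web_search` are hypothetical placeholders.

```python
# Sketch of the general idea behind search-augmented factuality checking:
# break a model response into atomic claims, look each one up, and flag
# anything the retrieved evidence does not support. This is a hedged
# illustration of the approach, not DeepMind's code; extract_claims()
# and web_search() are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class Verdict:
    claim: str
    supported: bool
    evidence: str


def extract_claims(response: str) -> list[str]:
    """Placeholder: a real system would use an LLM to split the response
    into self-contained factual claims."""
    return [s.strip() for s in response.split(".") if s.strip()]


def web_search(query: str) -> str:
    """Placeholder for a search API call returning snippet text."""
    return ""


def check_response(response: str) -> list[Verdict]:
    """Attach a supported/unsupported verdict to every claim in `response`."""
    verdicts = []
    for claim in extract_claims(response):
        evidence = web_search(claim)
        # Naive substring test; a production system would instead have an
        # LLM judge each claim against the retrieved snippets.
        supported = bool(evidence) and claim.lower() in evidence.lower()
        verdicts.append(Verdict(claim, supported, evidence))
    return verdicts
```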

Anthropic has given assurances that it is implementing new internal procedures to prevent similar citation errors from recurring. The incident nevertheless serves as a potent, industry-wide caution: rigorous human verification remains critical as artificial intelligence tools become increasingly integrated into professional workflows.

The path forward will undoubtedly require the development of more dependable AI systems, coupled with clear, robust usage guidelines and potentially new regulatory frameworks, such as the transparency mandates seen in the EU AI Act, to foster public trust and ensure accountability in the rapidly evolving age of generative AI.



