
Judge Strikes Part of Anthropic (Claude.AI) Expert’s Declaration, Because of Uncaught AI Hallucination in Part of Citation

By Advanced AI Bot · May 26, 2025 · 7 min read


From Friday’s order by Magistrate Judge Susan van Keulen in Concord Music Group, Inc. v. Anthropic PBC (N.D. Cal.):

At the outset, the Court notes that during the hearing, Publishers asked this Court to examine Anthropic’s expert, Ms. Chen, and strike her declaration because at least one of the citations therein appeared to have been an “AI hallucination”: a citation to an article that did not exist and whose purported authors had never worked together. The Court gave Anthropic time to investigate the circumstances surrounding the challenged citation. Having considered the declaration of Anthropic’s counsel and Publishers’ response, the Court finds this issue is a serious one—if not quite so grave as it at first appeared.

Anthropic’s counsel protests that this was “an honest citation mistake” but admits that Claude.ai was used to “properly format” at least three citations and, in doing so, generated a fictitious article name with inaccurate authors (who have never worked together) for the citation at issue. That is a plain and simple AI hallucination. Yet the underlying article exists, was properly linked to and was located by a human being using Google search; so, this is not a case where “attorneys and experts [have] abdicate[d] their independent judgment and critical thinking skills in favor of ready-made, AI-generated answers….”

A remaining serious concern, however, is Anthropic’s attestation that a “manual citation check” was performed but “did not catch th[e] error.” It is not clear how such an error—including a complete change in article title—could have escaped correction during manual cite-check by a human being. Furthermore, although the undersigned’s [i.e., the Magistrate Judge’s] standing order does not expressly address the use of AI by parties or counsel, Section VIII.G of [District] Judge Lee’s Civil Standing Order requires a certification “that lead trial counsel has personally verified the content’s accuracy.” Neither the certification nor verification has occurred here. In sum, the Court STRIKES-IN-PART Ms. Chen’s declaration, striking paragraph 9 [which contains the footnote that contains the citation with the hallucination], and notes for the record that this issue undermines the overall credibility of Ms. Chen’s written declaration, a factor in the Court’s conclusion.
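
As an aside, the failure mode the court flags here (a correct link paired with a fabricated title and authors) is exactly the kind of mismatch a mechanical check can surface. Below is a minimal, hypothetical sketch of such a guard in Python: it fetches the cited URL and fuzzy-matches the page’s <title> element against the title given in the citation. Nothing like this appears in the filings; the function name and similarity threshold are invented for illustration, and real cite-checking workflows would rely on publisher metadata (e.g., Crossref) rather than raw HTML.

    import difflib
    import re
    import urllib.request

    def title_matches_citation(url: str, cited_title: str, threshold: float = 0.6) -> bool:
        """Return True if the page at `url` carries a <title> similar to `cited_title`."""
        # Fetch the cited page. (Assumes a plain, publicly reachable HTML page.)
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        # Pull out the <title> element, if any.
        match = re.search(r"<title[^>]*>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
        if match is None:
            return False  # no title found; flag the citation for human review
        page_title = match.group(1).strip().lower()
        # Fuzzy-match: a hallucinated title should score far below the threshold.
        similarity = difflib.SequenceMatcher(None, page_title, cited_title.lower()).ratio()
        return similarity >= threshold

A check along these lines would presumably have flagged the citation in footnote 3, since the hallucinated title shares little wording with the real article the link resolves to.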

Thanks to ChatGPT Is Eating the World for the pointer; that post also discusses the substantive role of paragraph 9 in the declaration. Here’s more backstory (from an earlier post):

The Declaration filed by a “Data Scientist at Anthropic” in Concord Music Group, Inc. v. Anthropic PBC includes this citation:

But the cited article doesn’t seem to exist at that citation or at that URL, and Google found no other references to any article by that title….

Here’s the explanation, from one of Anthropic’s lawyers (emphasis added):

Our investigation of the matter confirms that this was an honest citation mistake and not a fabrication of authority. The first citation in footnote 3 of Dkts. 340-3 (sealed) and 341-2 (public) includes an erroneous author and title, while providing a correct link to, and correctly identifying the publication, volume, page numbers, and year of publication of, the article referenced by Ms. Chen as part of the basis for her statement in paragraph 9. We apologize for the inaccuracy and any confusion this error caused.

The American Statistician article reviewed and relied upon by Ms. Chen [the Anthropic expert], and accessible at the first link provided in footnote 3 of Dkts. 340-3 and 341-2, is titled Binomial Confidence Intervals for Rare Events: Importance of Defining Margin of Error Relative to Magnitude of Proportion, by Owen McGrath and Kevin Burke. A Latham & Watkins associate located that article as potential additional support for Ms. Chen’s testimony using a Google search. The article exists and supports Ms. Chen’s testimony in her declaration and at the May 13, 2025 hearing, which she proffered based on her pre-existing knowledge regarding the appropriate relative margin of error for rare events. A copy of the complete article is attached as Exhibit A.

Specifically, “in the context of small or rare-event success probabilities,” the authors “suggest restricting the range of values to ε_R ∈ [0.1, 0.5]”—meaning, a relative margin of error between 10% and 50%—“as higher values lead to imprecision and poor interval coverage, whereas lower values lead to sample sizes that are likely to be impractically large for many studies.” See Exhibit A, at 446. This recommendation is entirely consistent with Ms. Chen’s testimony, which proposes using a 25% relative margin of error based on her expertise.

After the Latham & Watkins team identified the source as potential additional support for Ms. Chen’s testimony, I asked Claude.ai to provide a properly formatted legal citation for that source using the link to the correct article. Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors. Our manual citation check did not catch that error. Our citation check also missed additional wording errors introduced in the citations during the formatting process using Claude.ai. These wording errors are: (1) that the correct title of the source in footnote 2 of Ms. Chen’s declaration is Computing Necessary Sample Size, not, as listed in footnote 2, Sample Size Estimation, and (2) the author/preparer of the third source cited in footnote 3 is “Windward Environmental LLC”, not “Lower Windward Environmental LLC.” Again, we apologize for these citation errors.

Ms. Chen, as well as counsel, reviewed the complete text of Ms. Chen’s testimony and also reviewed each of the cited references prior to submitting Ms. Chen’s declaration to the Court. In reviewing her declaration both prior to submission and in preparation for the hearing on May 13, 2025, Ms. Chen reviewed the actual article available at the first link in footnote 3 of her declaration and attached hereto as Exhibit A, and the article supports the proposition expressed in her declaration with respect to the appropriate margin of error.

During the production and cite-checking process for Ms. Chen’s declaration, the Latham & Watkins team reviewing and editing the declaration checked that the substance of the cited document supported the proposition in the declaration, and also corrected the volume and page numbers in the citation, but did not notice the incorrect title and authors, despite clicking on the link provided in the footnote and reviewing the article. The Latham & Watkins team also did not notice the additional wording errors in footnotes 2 and 3 of Ms. Chen’s declaration, as described above in paragraph 6.

This was an embarrassing and unintentional mistake. The article in question genuinely exists, was reviewed by Ms. Chen and supports her opinion on the proper margin of error to use for sampling. The insinuation that Ms. Chen’s opinion was influenced by false or fabricated information is thus incorrect. As is the insinuation that Ms. Chen lacks support for her opinion. Moreover, the link provided both to this Court and to Plaintiffs was accurate and, when pasted into a browser, calls up the correct article upon which Ms. Chen had relied. Had Plaintiffs’ counsel raised the citation issue when they first discovered it, we could and would have confirmed that the article cited was the one upon which Ms. Chen relied and corrected the citation mistake.

We have implemented procedures, including multiple levels of additional review, to work to ensure that this does not occur again and have preserved, at the Court’s direction, all information related to Ms. Chen’s declaration. I understand that Anthropic has also preserved all information related to Ms. Chen’s declaration as well….
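
A technical footnote on the ε_R passage quoted above: under the textbook normal-approximation (Wald) formula, the sample size needed to estimate a proportion p with relative margin of error ε_R is roughly n = z²(1 − p) / (ε_R² p), where z is the normal quantile for the chosen confidence level. That is why, for rare events, the declaration’s proposed 25% relative margin is dramatically cheaper in samples than a 10% one. The sketch below is illustrative only; it uses the plain Wald formula, whereas the cited McGrath and Burke article analyzes more refined binomial intervals for rare events.

    import math
    from statistics import NormalDist

    def wald_sample_size(p: float, eps_r: float, confidence: float = 0.95) -> int:
        """Samples needed so the Wald interval half-width is eps_r * p.

        Half-width d = z * sqrt(p * (1 - p) / n) and eps_r = d / p,
        so n = z**2 * (1 - p) / (eps_r**2 * p).
        """
        z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # 1.96 at 95%
        return math.ceil(z**2 * (1 - p) / (eps_r**2 * p))

    # For a rare event occurring 1% of the time:
    print(wald_sample_size(0.01, 0.25))  # 25% relative margin -> 6,085 samples
    print(wald_sample_size(0.01, 0.10))  # 10% relative margin -> 38,031 samples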


