Advanced AI News
Jack Clark

Leaked Surge AI List Shows Which Sites Shaped Anthropic’s AI

By Advanced AI Editor | July 23, 2025 | 6 min read


An internal spreadsheet obtained by Business Insider shows which websites Surge AI gig workers were told to mine — and which to avoid — while fine-tuning Anthropic’s AI to make it sound more “helpful, honest, and harmless.”

The spreadsheet allows sources like Bloomberg, Harvard University, and the New England Journal of Medicine while blacklisting others like The New York Times and Reddit.

Anthropic says it wasn't aware of the spreadsheet and that it was created by a third-party vendor, the data-labeling startup Surge AI, which declined to comment on this point.

“This document was created by a third-party vendor without our involvement,” an Anthropic spokesperson said. “We were unaware of its existence until today and cannot validate the contents of the specific document since we had no role in its creation.”

Frontier AI companies mine the internet for content and often work with startups with thousands of human contractors, like Surge, to refine their AI models.

In this case, project documents show Surge worked to make Anthropic’s AI sound more human, avoid “offensive” statements, and cite documents more accurately.

Many of the whitelisted sources copyright-protect or otherwise restrict their content. The Mayo Clinic, Cornell University, and Morningstar, whose main websites were all listed as "sites you can use," told BI they don't have any agreements with Anthropic to use this data for training AI models.

Surge left a trove of materials detailing its work for Anthropic, including the spreadsheet, accessible to anyone with the link on Google Drive. Surge locked down the documents shortly after BI reached out for comment.

“We take data security seriously, and documents are restricted by project and access level where possible,” a Surge spokesperson said. “We are looking closely into the matter to ensure all materials are protected.”

It’s the latest incident in which a data-labeling startup used public Google Docs to pass around sensitive AI training instructions. Surge’s competitor, Scale AI, also exposed internal data in this manner, locking the documents down after BI revealed the issue.

A Google Cloud spokesperson told BI that its default setting restricts a company’s files from sharing outside the organization; changing this setting is a “choice that a customer explicitly makes,” the spokesperson said.

Surge hit $1 billion in revenue last year and is raising funds at a $15 billion valuation, Reuters reported. Anthropic was most recently valued at $61.5 billion, and its Claude chatbot is widely considered a leading competitor to ChatGPT.

What’s allowed — and what’s not

Google Sheet data showed the spreadsheet was created in November 2024, and it’s referenced in updates as recent as May 2025 in other documents left public by Surge.

The list functions as a “guide” for what online sources Surge’s gig workers can and can’t use on the Anthropic project.

The list includes over 120 permitted websites from a wide range of fields, including academia, healthcare, law, and finance. Ten US universities are on it, among them Harvard, Yale, Northwestern, and the University of Chicago.

It also lists popular business news sources, such as Bloomberg, PitchBook, Crunchbase, Seeking Alpha, Investing.com, and PR Newswire.

Medical information sources, such as the New England Journal of Medicine, and government sources, such as a list of UN treaties and the US National Archives, are also in the whitelist. So are university publishers like Cambridge University Press.

Here’s the full list of who’s allowed, which says that it is “not exhaustive.” And here’s the list of who is banned — or over 50 “common sources” that are “now disallowed,” as the spreadsheet puts it.

The blacklist mostly consists of media outlets like The New York Times, The Wall Street Journal, and others. It also includes other types of sources like Reddit, Stanford University, the academic publisher Wiley, and the Harvard Business Review.

The spreadsheet doesn’t explain why some sources are permitted and others are not.

The blacklist could reflect websites that made direct demands to AI companies to stop using their content, said Edward Lee, a law professor at Santa Clara University. That can happen through written requests or through an automated method like robots.txt.
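An automated opt-out of that kind typically lives in a site's robots.txt file. A hypothetical example is below; the user-agent tokens match the crawler names OpenAI and Anthropic publish, but the file itself is illustrative, not taken from any site in the spreadsheet:

```text
# Block AI training crawlers from the whole site
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /
```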

Some sources in the blacklist have taken legal stances against AI companies using their content. Reddit, for example, sued Anthropic this year, saying the AI company accessed its site without permission. Anthropic has denied these claims. The New York Times sued OpenAI, and The Wall Street Journal’s parent, Dow Jones, sued Perplexity, for similar reasons.

“The Times has objected to Anthropic’s unlicensed use of Times content for AI purposes and has taken steps to block their access as part of our ongoing IP protection and enforcement efforts,” the Times spokesperson Charlie Stadtlander told BI.

“As the law and our terms of service make clear, scraping or using the Times’s content is prohibited without our prior written permission, such as a licensing agreement.”

Surge workers used the list for RLHF

Surge contractors were told to use the list for a later, but crucial, stage of AI model training in which humans rate an existing chatbot’s responses to improve them. That process is called “reinforcement learning from human feedback,” or RLHF.

The Surge contractors working for Anthropic did tasks like copying and pasting text from the internet, asking the AI to summarize it, and choosing the best summary. In another case, workers were asked to “find at least 5-10 PDFs” from the web and quiz Anthropic’s AI about the documents’ content to improve its citation skills.
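Ratings from tasks like these are typically turned into a training signal via a pairwise preference loss. A minimal sketch in Python (not Anthropic's actual pipeline, just the standard Bradley-Terry formulation commonly used in RLHF reward modelling):

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry preference loss: -log(sigmoid(r_chosen - r_rejected)).

    The loss is small when the reward model already scores the
    human-preferred response higher, and large when it disagrees,
    so minimizing it pushes the model toward the rater's choice.
    """
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A rater picked summary A over summary B; the reward model scored them:
loss_agree = preference_loss(2.0, 0.5)     # model agrees with the rater
loss_disagree = preference_loss(0.5, 2.0)  # model disagrees
print(loss_agree < loss_disagree)
```

Thousands of such comparisons, aggregated across contractors, are what steer a chatbot's tone and citation habits without feeding new web text into its weights.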

That doesn’t involve feeding web data directly into the model for it to regurgitate later, the better-known process called pre-training.

Courts haven’t addressed whether there’s a clear distinction between the two processes when it comes to copyright law. There’s a good chance both would be viewed as crucial to building a state-of-the-art AI model, Lee, the law professor, said.

It is “probably not going to make a material difference in terms of fair use,” Lee said.

Have a tip? Contact this reporter via email at crollet@insider.com or Signal and WhatsApp at 628-282-2811. Use a personal email address, a nonwork WiFi network, and a nonwork device; here’s our guide to sharing information securely.
