Advanced AI News

Anthropic Says Agentic AI Has Been ‘Weaponized’ for Hacking

By Advanced AI Editor | August 27, 2025 | 2 min read


AI isn’t just helping white-collar workers be more productive — it’s also aiding white-collar criminals.

Anthropic said in a Wednesday report that it detected and thwarted cybercriminals attempting to carry out hacks using the startup’s AI tool, Claude.

While AI has been used in hacking efforts for years, Anthropic said advances in the technology mean it’s being used to “perform” cyberattacks throughout the entire operation — and with smaller teams.

“Agentic AI has been weaponized,” the startup said.

Anthropic’s suite of Claude coding tools is widely used in the tech world, including at Meta.

They can help novice coders create software with simple prompts or help more experienced software engineers be more productive. According to Anthropic’s report, the same is true for cybercrime.

AI means hackers no longer require the same level of technical expertise because they can instruct tools like Claude to create malicious code, Anthropic said, describing it as “vibe hacking.”


In one example, a cybercriminal used Claude Code to “commit large-scale theft and extortion of personal data” and then attempted to extort victims into paying to prevent the data from being leaked. The hacker used Claude for reconnaissance of targets, automating attacks, calculating ransom fees, and generating “visually alarming ransom notes,” the startup said.

In another example, a cybercriminal used Claude to develop ransomware, a type of software that encrypts a target’s files and demands payment to unlock them. The cybercriminal sold the ransomware packages on internet forums for up to $1,200, Anthropic said.

“Without Claude’s assistance, they could not implement or troubleshoot core malware components,” Anthropic said of the hacker, adding that it banned the associated account and reported it to relevant parties.

Anthropic said it was sharing its findings so that other researchers and organizations could “strengthen their own defenses against the abuse of AI systems.” It added that it had also implemented new ways to detect misuse of its tools.

The Amazon-backed startup is in talks to raise about $5 billion in funding at a $170 billion valuation, Business Insider reported earlier this month.


