Advanced AI News

Hackers Manipulate Claude AI Chatbot as Part of at Least 17 Cyber Attacks

By Advanced AI Editor | September 2, 2025 | 6 min read

A recent campaign of cyber attacks made novel use of the Claude AI chatbot to scan VPN endpoints and automate multiple stages of the attack cycle, marking another step forward in the deployment of LLMs for malicious purposes.

The attackers used the development tool Claude Code to automate reconnaissance, credential harvesting, security evasion, and data exfiltration, among other aspects of a ransom campaign that victimized at least 17 organizations across a variety of industries. While this is not the first documented use of the Claude AI chatbot as an assistant in cyber attacks, it is far more complex and multifaceted than previous instances.

Weaponization of the Claude AI chatbot significantly lowers technical barriers to cyber crime

The news comes from Anthropic’s own Threat Intelligence report on recent malicious uses of its Claude AI chatbot; the campaign of successful ransom-based cyber attacks, dubbed “GTG-2002,” is far and away the most advanced and complex of these.

The campaign impacted at least 17 organizations across multiple countries and industries, including government, healthcare, emergency services, and religious institutions. The attacker compiled their operational preferences in a “claude.md” file for Claude Code, but this served as a guide for the Claude AI chatbot rather than as a comprehensive ruleset.

The AI made “tactical and strategic” decisions during its operations, such as how to prioritize potential vulnerabilities revealed during its explorations, which types of data to exfiltrate, and how best to calibrate extortion demands to the circumstances. The attacker demanded ransoms of up to $500,000 in some cases.

The Claude AI chatbot’s initial task was to scan thousands of VPN endpoints for potentially vulnerable systems, filtering them further by country and technology type. For credential exploitation, the chatbot was directed to extract and analyze multiple types of credential sets, access Active Directory systems, and perform comprehensive network enumeration. It was even induced to create malware, such as an obfuscated version of the Chisel tunneling tool designed to evade Windows Defender detection.

When the Claude AI chatbot noticed that security software was identifying its approach, it adapted by introducing new techniques such as string encryption, anti-debugging code, and filename masquerading. And once its attacks penetrated victim systems, it identified and systematically extracted sensitive data such as financial information and Social Security numbers, then sorted the stolen data for the attacker according to its monetization potential.

The ransom notes themselves were personalized: the Claude AI chatbot automatically researched and incorporated elements such as industry-specific regulations, employee counts, and financial details, with demands (ranging from $75,000 to $500,000) scaled to the victim’s size and individual circumstances.

Cyber attacks increasingly automated, accessible to non-technical actors

While AI’s use in hacking has to date largely been a case of hype over actual threat, this new development is a concrete indicator that it is now, at minimum, substantially lowering the threshold for non-technical actors to execute viable cyber attacks. It is also clearly capable of speeding up and automating common aspects of attacks for more polished professional hackers, increasing their output during windows in which they hold the element of surprise and novelty.

While the GTG-2002 activity is the most complex thus far, the threat report notes the Claude AI chatbot has also been successfully used for more individualized components of various cyber attacks. This includes use by suspected North Korean state-sponsored hackers as part of their remote IT worker scams, including not just crafting detailed personas but also taking employment tests and doing day-to-day work once hired. Another highly active party in the UK has been using Claude to develop individual ransomware tools with sophisticated capabilities and selling them on underground forums at $400 to $1,200 each.

The news also comes just days after disclosure of an OpenAI GPT model being used to create a viable strain of ransomware called “PromptLock.” The script is in use in the wild by threat actors and can currently encrypt and exfiltrate data, and researchers say the attackers are in the process of upgrading it to also destroy files. Though researchers believe the attackers are likely still testing it rather than actively attempting to exploit targets with it, it already has the potential to be used for that purpose.

Anthropic says that it has responded to the cyber attacks by adding a tailored classifier specifically for the observed activity and a new detection method to ensure similar activity is captured by the standard security pipeline. Steve Povolny, Senior Director of Security Research at Exabeam, notes that these developments are likely just the opening moves in an ongoing “arms race” between offensive and defensive AI capability: “While the size and scope of the Anthropic AI cybercrime spree may grab your attention, the reality is that threat actors have been leveraging foundational models to conduct cybercrime for years now. It sounds shocking that modern LLMs can be used to orchestrate all parts of a modern ransomware campaign, but the reality is it’s not difficult to do this, when the attacker breaks the attack up into small task-driven pieces. For example, it’s extremely unlikely that Claude would provide anything valuable if you asked it to write some ransomware that will target companies who are explicitly vulnerable. However, if you were to describe in your ‘vibe coding’ prompt that you are writing an enterprise encryption tool to test and strengthen your company’s encryption policies, you can very easily adapt it or use it for nefarious purposes. In the same way, an attacker can ‘trick’ the model into thinking it’s building a threat intelligence company profile, which will be easily applied to profiling vulnerable targets for extortion.”

“Ultimately, it’s exceptionally difficult for an LLM to recognize what something is intended to be used for, while still being valuable to a wide variety of users and applications. We have to simply assume that attackers can construct large-scale, specific and complex attack scenarios with dramatically increased speed, in the same way that non-coders can now create enterprise applications and services with little to no prior knowledge. With this in mind, the focus should be less on ‘why didn’t the model properly recognize the malicious intentions of the user’ and more on ‘how can we use the same technology breakthroughs to test, improve and harden cyber defenses.’ The reality is that the attack methods haven’t fundamentally changed that much; it’s just a whole lot easier, faster and cheaper for attackers,” added Povolny.
