It’s no longer a hypothetical: Anthropic has discovered a hacker using its AI chatbot to plan and execute a large-scale data extortion campaign that targeted 17 organizations last month.
The San Francisco company says an unnamed hacker “used AI to what we believe is an unprecedented degree,” automating large portions of the hacking spree with Claude AI.
“This threat actor leveraged Claude’s code execution environment to automate reconnaissance, credential harvesting, and network penetration at scale, potentially affecting at least 17 distinct organizations in just the last month across government, healthcare, emergency services, and religious institutions,” Anthropic said on Wednesday. A defense contractor was also affected.
The company disclosed the incident in a new threat intelligence report documenting its efforts to prevent cybercriminals and state-sponsored hackers from exploiting Claude. However, the same report also warns about an unsettling “evolution in AI-assisted cybercrime, where AI serves as both a technical consultant and active operator,” enabling human hackers to pull off attacks they could never have achieved alone.
In the data theft extortion case, the hacker abused Claude Code, a tool for programmers, to breach the targeted organizations and steal “personal records, including healthcare data, financial information, government credentials, and other sensitive information.”
“Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming ransom notes that were displayed on victim machines,” Anthropic added, noting the ransom amounts ranged from $75,000 to over $500,000 in bitcoin.
Although Claude was built with safeguards to prevent such misuse, the hacker bypassed the guardrails by uploading a configuration file to the AI that “included a cover story claiming network security testing under official support contracts while providing detailed attack methodologies and target prioritization frameworks,” Anthropic found.
During the campaign, the hacker first used Claude to scan for vulnerable networks at “high success rates” before breaching them, which appears to have included brute-forcing access with credentials. In another disturbing find, Claude created malware and other custom tools to evade Windows Defender during the intrusion attempts.
The incident stands out from earlier findings, in which hackers used generative AI only for a specific task, such as writing a phishing email, getting coding help, or conducting vulnerability research. “AI models are now being used to perform sophisticated cyberattacks, not just advise on how to carry them out,” Anthropic added.
In response, the company banned the accounts the hacker used to access Claude. Anthropic also said it “developed a tailored classifier (an automated screening tool), and introduced a new detection method to help us discover activity like this as quickly as possible in the future.”
Still, the company expects more hackers to adopt AI chatbots in the same way, which risks unleashing more cybercrime. In the same threat intelligence report, Anthropic said it discovered a separate, possibly amateur hacker using Claude to develop, market, and sell several variants of ransomware.
“This actor appears to have been dependent on AI to develop functional malware. Without Claude’s assistance, they could not implement or troubleshoot core malware components,” the company added.
On Tuesday, ESET also discovered a mysterious ransomware strain that harnesses an open-weight OpenAI model to generate malicious code on infected devices.

About Michael Kan
Senior Reporter
