Anthropic ships automated security reviews for Claude Code as AI-generated vulnerabilities surge

By Advanced AI Editor · August 6, 2025 · 8 min read

Anthropic launched automated security review capabilities for its Claude Code platform on Wednesday, introducing tools that can scan code for vulnerabilities and suggest fixes as artificial intelligence dramatically accelerates software development across the industry.

The new features arrive as companies increasingly rely on AI to write code faster than ever before, raising critical questions about whether security practices can keep pace with the velocity of AI-assisted development. Anthropic’s solution embeds security analysis directly into developers’ workflows through a simple terminal command and automated GitHub reviews.

“People love Claude Code, they love using models to write code, and these models are already extremely good and getting better,” said Logan Graham, a member of Anthropic’s frontier red team who led development of the security features, in an interview with VentureBeat. “It seems really possible that in the next couple of years, we are going to 10x, 100x, 1000x the amount of code that gets written in the world. The only way to keep up is by using models themselves to figure out how to make it secure.”

The announcement comes just one day after Anthropic released Claude Opus 4.1, an upgraded version of its most powerful AI model that shows significant improvements in coding tasks. The timing underscores an intensifying competition between AI companies, with OpenAI expected to announce GPT-5 imminently and Meta aggressively poaching talent with reported $100 million signing bonuses.


Why AI code generation is creating a massive security problem

The security tools address a growing concern in the software industry: as AI models become more capable at writing code, the volume of code being produced is exploding, but traditional security review processes haven’t scaled to match. Currently, security reviews rely on human engineers who manually examine code for vulnerabilities — a process that can’t keep pace with AI-generated output.

Anthropic’s approach uses AI to solve the problem AI created. The company has developed two complementary tools that leverage Claude’s capabilities to automatically identify common vulnerabilities including SQL injection risks, cross-site scripting vulnerabilities, authentication flaws, and insecure data handling.
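To make those categories concrete, consider the most familiar of them. The following Python sketch is illustrative only — it is not code from Anthropic or Claude Code — but it shows a textbook SQL injection flaw of the kind such a scanner flags, alongside the parameterized fix it would typically suggest:

```python
import sqlite3

def get_user_unsafe(conn: sqlite3.Connection, username: str):
    # VULNERABLE: user input is interpolated directly into the SQL string,
    # so an input like "alice' OR '1'='1" changes the query's meaning.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def get_user_safe(conn: sqlite3.Connection, username: str):
    # FIX: a parameterized query keeps user input as data, never as SQL.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```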

The first tool is a /security-review command that developers can run from their terminal to scan code before committing it. “It’s literally 10 keystrokes, and then it’ll set off a Claude agent to review the code that you’re writing or your repository,” Graham explained. The system analyzes code and returns high-confidence vulnerability assessments along with suggested fixes.

The second component is a GitHub Action that automatically triggers security reviews when developers submit pull requests. The system posts inline comments on code with security concerns and recommendations, ensuring every code change receives a baseline security review before reaching production.

How Anthropic tested the security scanner on its own vulnerable code

Anthropic has been testing these tools internally on its own codebase, including Claude Code itself, providing real-world validation of their effectiveness. The company shared specific examples of vulnerabilities the system caught before they reached production.

In one case, engineers built a feature for an internal tool that started a local HTTP server intended for local connections only. The GitHub Action identified a remote code execution vulnerability exploitable through DNS rebinding attacks, which was fixed before the code was merged.
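Anthropic did not publish the vulnerable code itself, but the underlying pattern is well documented: binding a server to 127.0.0.1 does not stop a victim's browser from reaching it, because an attacker-controlled domain can be rebound to 127.0.0.1 via DNS. Here is a minimal Python sketch of the standard mitigation — validating the Host header — using the standard library's http.server; it is a generic illustration, not Anthropic's internal tool:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_HOSTS = {"localhost", "127.0.0.1"}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Binding to 127.0.0.1 alone is NOT enough: a malicious web page can
        # point an attacker-controlled domain at 127.0.0.1 (DNS rebinding)
        # and drive the victim's browser to this server. Checking the Host
        # header rejects such requests, because the browser sends the
        # attacker's domain rather than localhost.
        host = (self.headers.get("Host") or "").split(":")[0]
        if host not in ALLOWED_HOSTS:
            self.send_error(403, "Forbidden: unexpected Host header")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8123), Handler).serve_forever()
```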

Another example involved a proxy system designed to manage internal credentials securely. The automated review flagged that the proxy was vulnerable to Server-Side Request Forgery (SSRF) attacks, prompting an immediate fix.
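Again, the specific proxy code is internal, but the class of fix is standard: before fetching a caller-supplied URL, resolve its hostname and refuse internal address ranges. A hedged Python sketch using only the standard library — the helper name fetch_via_proxy is hypothetical:

```python
import ipaddress
import socket
from urllib.parse import urlparse
from urllib.request import urlopen

def fetch_via_proxy(url: str) -> bytes:
    # SSRF guard: a naive proxy that fetches any caller-supplied URL can be
    # steered at internal targets such as http://169.254.169.254/ (cloud
    # metadata) or http://localhost:8080/. Validate before fetching.
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError("only http(s) URLs are allowed")
    if not parsed.hostname:
        raise ValueError("URL has no hostname")
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    # Resolve the hostname and refuse private, loopback, and link-local
    # ranges. (A hardened proxy would also pin this resolution for the
    # actual connection, to avoid DNS rebinding between check and use.)
    for info in socket.getaddrinfo(parsed.hostname, port):
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            raise ValueError(f"refusing to fetch internal address {addr}")
    with urlopen(url, timeout=10) as resp:
        return resp.read()
```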

“We were using it, and it was already finding vulnerabilities and flaws and suggesting how to fix them in things before they hit production for us,” Graham said. “We thought, hey, this is so useful that we decided to release it publicly as well.”

Beyond addressing the scale challenges facing large enterprises, the tools could democratize sophisticated security practices for smaller development teams that lack dedicated security personnel.

“One of the things that makes me most excited is that this means security review can be kind of easily democratized to even the smallest teams, and those small teams can be pushing a lot of code that they will have more and more faith in,” Graham said.

The system is designed to be immediately accessible. According to Graham, developers can start using the security review feature within seconds of the release, requiring just about 15 keystrokes to launch. The tools integrate seamlessly with existing workflows, processing code locally through the same Claude API that powers other Claude Code features.

Inside the AI architecture that scans millions of lines of code

The security review system works by invoking Claude through an “agentic loop” that analyzes code systematically. According to Anthropic, Claude Code uses tool calls to explore large codebases, starting by understanding changes made in a pull request and then proactively exploring the broader codebase to understand context, security invariants, and potential risks.
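Anthropic has not published the scanner's internals, but the agentic loop it describes follows a common pattern: call the model, let each reply either request a tool invocation (read a file, search the repo) or deliver final findings, and feed tool results back into the conversation. A schematic Python sketch of that control flow — call_model and the single read_file tool are hypothetical stand-ins, not the actual Claude API or Claude Code toolset:

```python
# Schematic agentic review loop. `call_model` and the tool set are
# hypothetical placeholders, not Anthropic's actual implementation.

def read_file(path: str) -> str:
    with open(path, encoding="utf-8", errors="replace") as f:
        return f.read()

TOOLS = {"read_file": read_file}  # a real agent also exposes grep, ls, etc.

def security_review(diff: str, call_model) -> str:
    messages = [{
        "role": "user",
        "content": "Review this pull-request diff for vulnerabilities, "
                   "using tools to read surrounding code:\n" + diff,
    }]
    while True:
        reply = call_model(messages, tools=list(TOOLS))
        if reply["type"] == "tool_call":   # the model asks to inspect code
            result = TOOLS[reply["name"]](**reply["arguments"])
            messages.append({"role": "tool", "name": reply["name"],
                             "content": result})
        else:                              # the model returns its findings
            return reply["content"]
```

The real system exposes a richer toolset and filters for high-confidence findings; the sketch shows only the loop structure the company describes.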

Enterprise customers can customize the security rules to match their specific policies. The system is built on Claude Code’s extensible architecture, allowing teams to modify existing security prompts or create entirely new scanning commands through simple markdown documents.

“You can take a look at the slash commands, because a lot of times slash commands are run via actually just a very simple Claude.md doc,” Graham explained. “It’s really simple for you to write your own as well.”

The $100 million talent war reshaping AI security development

The security announcement comes amid a broader industry reckoning with AI safety and responsible deployment. Recent research from Anthropic has explored techniques for preventing AI models from developing harmful behaviors, including a controversial “vaccination” approach that exposes models to undesirable traits during training to build resilience.

The timing also reflects the intense competition in the AI space. Anthropic released Claude Opus 4.1 on Tuesday, with the company claiming significant improvements in software engineering tasks—scoring 74.5% on the SWE-Bench Verified coding evaluation, compared to 72.5% for the previous Claude Opus 4 model.

Meanwhile, Meta has been aggressively recruiting AI talent with massive signing bonuses, though Anthropic CEO Dario Amodei recently stated that many of his employees have turned down these offers. The company maintains an 80% retention rate for employees hired over the last two years, compared to 67% at OpenAI and 64% at Meta.

Government agencies can now buy Claude as enterprise AI adoption accelerates

The security features represent part of Anthropic’s broader push into enterprise markets. Over the past month, the company has shipped multiple enterprise-focused features for Claude Code, including analytics dashboards for administrators, native Windows support, and multi-directory support.

The U.S. government has also endorsed Anthropic’s enterprise credentials, adding the company to the General Services Administration’s approved vendor list alongside OpenAI and Google, making Claude available for federal agency procurement.

Graham emphasized that the security tools are designed to complement, not replace, existing security practices. “There’s no one thing that’s going to solve the problem. This is just one additional tool,” he said. However, he expressed confidence that AI-powered security tools will play an increasingly central role as code generation accelerates.

The race to secure AI-generated software before it breaks the internet

As AI reshapes software development at an unprecedented pace, Anthropic’s security initiative represents a critical recognition that the same technology driving explosive growth in code generation must also be harnessed to keep that code secure. Graham’s team, called the frontier red team, focuses on identifying potential risks from advanced AI capabilities and building appropriate defenses.

“We have always been extremely committed to measuring the cybersecurity capabilities of models, and I think it’s time that defenses should increasingly exist in the world,” Graham said. The company is particularly encouraging cybersecurity firms and independent researchers to experiment with creative applications of the technology, with an ambitious goal of using AI to “review and preventatively patch or make more secure all of the most important software that powers the infrastructure in the world.”

The security features are available immediately to all Claude Code users, with the GitHub Action requiring one-time configuration by development teams. But the bigger question looming over the industry remains: Can AI-powered defenses scale fast enough to match the exponential growth in AI-generated vulnerabilities?

For now, at least, the machines are racing to fix what other machines might break.
