
How Google DeepMind’s CaMeL Architecture Aims to Block LLM Prompt Injections

By Advanced AI Bot | April 27, 2025 | 5 min read

Google DeepMind researchers are proposing a different way to secure Large Language Model (LLM) agents against manipulation, moving beyond model training or simple filters to an architectural defense called CaMeL (Capabilities for Machine Learning).

Detailed in a paper published on arXiv, CaMeL applies established software security ideas like capability tracking and control flow integrity to shield LLM agents interacting with potentially malicious external data, aiming to prevent data theft or unintended actions orchestrated through prompt injection attacks.

The Persistent Problem of Prompt Injection

Despite ongoing efforts across the industry, LLMs remain susceptible to various forms of prompt injection. Security researchers highlighted vulnerabilities in OpenAI’s multimodal GPT-4V back in October 2023, where instructions hidden within images could manipulate the model.

More recently, security researcher Johann Rehberger demonstrated exploits against memory functions of models like Google’s Gemini Advanced (February 2025) and previously OpenAI’s ChatGPT (September 2024), showing how indirect prompt injection, slipped into documents or emails processed by the agent, could be used to corrupt stored data or siphon information. These incidents underscore the challenge of creating truly robust defenses against adversaries who hide malicious commands within seemingly benign data inputs.

Building Security Around the LLM

CaMeL’s architecture tackles this by treating the core LLM components as potentially untrustworthy black boxes and building a secure execution environment around them. It refines the “Dual LLM” pattern, an approach discussed by experts like Simon Willison, who also helped identify the “prompt injection” threat class back in 2022.

CaMeL has its Privileged LLM generate Python code representing the user’s intended task. Rather than letting the LLM orchestrate tool calls directly, this code is executed by a custom interpreter, which becomes the control center. When the code needs to interact with untrusted data (like summarizing an email), it invokes a Quarantined LLM that simply parses or extracts information according to a defined schema, without the ability to call external tools itself.
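
A minimal sketch of that control flow, assuming hypothetical names (PrivilegedLLM, QuarantinedLLM, PlanInterpreter) rather than the paper’s actual API:

```python
# Hypothetical sketch of the dual-LLM flow described above; these class names
# and method signatures are illustrative assumptions, not the paper's API.

class PrivilegedLLM:
    """Sees only the trusted user request and emits plan code in Python."""
    def plan(self, user_request: str) -> str:
        # A real system would call a capable model here. A generated plan
        # might look like:
        #   email = read_latest_email()
        #   date = query_quarantined("extract the meeting date", email, schema="date")
        #   create_calendar_event(date)
        raise NotImplementedError

class QuarantinedLLM:
    """Parses untrusted content into a fixed schema; it is given no tools,
    so instructions injected into that content cannot trigger actions."""
    def extract(self, instruction: str, untrusted_text: str, schema: str) -> str:
        raise NotImplementedError

class PlanInterpreter:
    """Custom interpreter that runs the plan statement by statement and
    mediates every tool call, instead of letting the LLM call tools directly."""
    def execute(self, plan_code: str) -> None:
        raise NotImplementedError

def run_agent(user_request: str) -> None:
    plan = PrivilegedLLM().plan(user_request)  # trusted planning step
    PlanInterpreter().execute(plan)            # all tool and data access is mediated here
```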

Crucially, the interpreter tracks “capabilities” associated with every piece of data – metadata detailing its origin and permitted uses. Capabilities, in this security context, function like fine-grained, unforgeable tags attached to data specifying its provenance (e.g., user input, a specific tool) and access rights (e.g., which users or tools can read it), drawing inspiration from established computer security systems.

“CaMeL associates, to every value, some metadata (commonly called capabilities in the software security literature) to restrict data and control flows,” the researchers explain in their paper, enabling detailed security rules based on principles like Information Flow Control (tracking data propagation) and Control Flow Integrity (ensuring execution follows permitted paths). Before executing any function that interacts with the outside world (a tool call), the interpreter consults predefined security policies, checking if the data’s capabilities permit the intended action.
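
In code, a capability can be modeled as a small, immutable tag carried alongside each value, and a security policy becomes a plain predicate the interpreter evaluates before a tool runs. The sketch below is an illustration of that idea under assumed names; the fields and the example policy are not taken from the paper:

```python
# Illustrative only: capability tags plus a policy check before a tool call.
# The field names and the example policy are assumptions, not the paper's rules.

from dataclasses import dataclass
from typing import Any, Tuple

@dataclass(frozen=True)
class Capability:
    source: str                       # provenance, e.g. "user_prompt" or "tool:read_email"
    readers: frozenset = frozenset()  # principals allowed to receive this value

@dataclass
class Tagged:
    value: Any
    caps: Tuple[Capability, ...]      # capabilities accumulated as the value flows

def policy_allows_send(recipient: Tagged, body: Tagged) -> bool:
    # Example policy: the body may only flow to a recipient that every one of
    # its capabilities lists as an allowed reader.
    return all(recipient.value in cap.readers for cap in body.caps)

def send_email_tool(recipient: Tagged, body: Tagged) -> None:
    if not policy_allows_send(recipient, body):
        raise PermissionError("Policy blocked: body is not readable by this recipient")
    # ... the actual send would happen here ...
```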

Evaluating CaMeL’s Effectiveness

To evaluate CaMeL’s effectiveness, the DeepMind team utilized AgentDojo, a benchmark specifically designed for assessing agent security across tasks in domains like Workspace, Banking, Travel, and Slack.

Their findings indicate CaMeL successfully defended against 67% of the benchmark’s prompt injection attacks across various models, often reducing successful attacks to zero for models like GPT-4o, even without highly specific security policies in place. This performance contrasts with other defense mechanisms evaluated on the same benchmark, such as spotlighting (input filtering) or prompt sandwiching (repeating instructions).

This security layer isn’t free. The analysis showed CaMeL typically requires around 2.7 to 2.8 times more tokens (both input and output) on average compared to standard LLM tool use, primarily because the Privileged LLM might need several attempts to generate error-free Python code for the interpreter.

Agent utility saw some impact, particularly in tasks requiring interaction with poorly documented external APIs, though more advanced LLMs performed better, suggesting improvements are possible as base models evolve. A notable strength highlighted is CaMeL’s approach of “not relying on more AI to solve AI problems,” contrasting it with probabilistic defenses that may achieve high but imperfect detection rates.

Beyond Standard Injection: Side Channels and Rogue Tools

The DeepMind researchers acknowledge CaMeL is not a complete solution. Side-channel attacks, where an attacker infers information by observing system behavior rather than accessing data directly, remain a concern.

The paper details how an adversary might infer private data by observing indirect consequences, for example, by checking if an external resource is accessed repeatedly in a loop whose iteration count depends on the private value, or by seeing if program execution halts due to an error triggered conditionally on the secret information.

To counter some of these risks, CaMeL includes a “STRICT” interpretation mode which enforces tighter data dependency tracking for control flow statements, making operations within loops or conditionals dependent on the condition variable itself. This offers stronger protection but potentially requires more user confirmations for actions involving sensitive data, risking user fatigue.
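
To make the loop example concrete, the following sketch (an illustration, not code from the paper) shows how an iteration count tied to a secret becomes an observable signal, and what strict dependency tracking has to taint to catch it:

```python
# Illustration of the loop-count side channel; not code from the paper.

def leak_via_loop(secret_count: int, ping_attacker) -> None:
    # The attacker never sees the secret directly, but can simply count how
    # many external requests arrive: one per loop iteration.
    for _ in range(secret_count):
        ping_attacker()  # any externally observable effect works

# Under a STRICT-style interpretation, the capabilities of the loop condition
# (here, secret_count) would be attached to every operation inside the loop
# body, so a policy check on ping_attacker() would see that the call depends
# on private data and could block it or require user confirmation.
```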

The paper also suggests CaMeL’s architecture, by controlling tool execution and data flow, might offer potential defenses against threats beyond standard prompt injection, such as a rogue user attempting to misuse the agent to violate policy or a malicious “spy tool” trying to passively exfiltrate data processed by the agent, scenarios discussed in Section 7 of the paper.

While other industry players like Microsoft have deployed defenses such as Azure AI Studio’s Prompt Shields (first previewed in April 2024) using filtering techniques, CaMeL represents a distinct, architecture-first approach. As AI agents become more autonomous – a future anticipated by industry experts like Anthropic’s CISO Jason Clinton, who recently projected the arrival of “virtual employee” agents – such structured security architectures may become increasingly necessary.


