MELON: Provable Defense Against Indirect Prompt Injection Attacks in AI Agents
Kaijie Zhu and 4 other authors
Abstract: Recent research has shown that LLM agents are vulnerable to indirect prompt injection (IPI) attacks, where malicious tasks embedded in tool-retrieved information can redirect the agent to take unauthorized actions. Existing defenses against IPI have significant limitations: they either require substantial model-training resources, lack effectiveness against sophisticated attacks, or harm the agent's utility on benign tasks. We present MELON (Masked re-Execution and TooL comparisON), a novel IPI defense. Our approach builds on the observation that under a successful attack, the agent's next action becomes less dependent on the user task and more dependent on the malicious task. Following this, we design MELON to detect attacks by re-executing the agent's trajectory with a masked user prompt modified through a masking function. We identify an attack if the actions generated in the original and masked executions are similar. We also include three key designs to reduce potential false positives and false negatives. Extensive evaluation on the IPI benchmark AgentDojo demonstrates that MELON outperforms SOTA defenses in both attack prevention and utility preservation. Moreover, we show that combining MELON with a SOTA prompt augmentation defense (denoted as MELON-Aug) further improves its performance. We also conduct a detailed ablation study to validate our key designs. Code is available at this https URL.
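The abstract's core detection idea can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the agent type, the `similar` comparator, and the placeholder masking prompt are all hypothetical, and the paper's three additional designs for reducing false positives/negatives are omitted.

```python
from typing import Callable

# Hypothetical types: a ToolCall is (tool_name, arguments); an Agent maps
# (user_prompt, tool_outputs) to its next planned tool calls.
ToolCall = tuple[str, dict]
Agent = Callable[[str, list[str]], list[ToolCall]]

# Placeholder output of the masking function described in the abstract.
MASK_PROMPT = "Summarize the retrieved content."

def melon_style_detect(agent: Agent, user_prompt: str,
                       tool_outputs: list[str],
                       similar: Callable[[ToolCall, ToolCall], bool]) -> bool:
    """Flag a potential IPI attack: re-execute with the user prompt masked
    and compare the resulting tool calls against the original run."""
    original_calls = agent(user_prompt, tool_outputs)  # normal execution step
    masked_calls = agent(MASK_PROMPT, tool_outputs)    # masked re-execution
    # If an action survives the masking of the user task, it is likely driven
    # by the tool-retrieved content, i.e., an injected malicious task.
    return any(similar(a, b) for a in original_calls for b in masked_calls)
```

The design choice reflected here is the one the abstract states: a benign agent's next action depends on the user task, so masking that task should change it, whereas an injected task embedded in tool outputs keeps steering the agent regardless of the mask.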
Submission history
From: Kaijie Zhu
[v1] Fri, 7 Feb 2025 18:57:49 UTC (1,452 KB)
[v2] Thu, 1 May 2025 20:51:22 UTC (1,452 KB)
[v3] Sat, 24 May 2025 23:01:12 UTC (1,450 KB)
[v4] Tue, 10 Jun 2025 18:13:09 UTC (655 KB)