Advanced AI News

Security Experts Warn All Major LLMs Can Be Deceived to Produce Malicious Content Using a Simple Universal Prompt

By Advanced AI Bot | April 26, 2025 | 3 Mins Read


Security researchers at HiddenLayer have identified a new class of attack against large language models (LLMs).

They claim that a single universal prompt can coax malicious content out of an LLM without users even realizing it. All of the industry's top models, including ChatGPT, Llama, DeepSeek, Qwen, Copilot, Gemini, and Mistral, were reportedly vulnerable to the novel technique, which the researchers have named Policy Puppetry Prompt Injection.

The single universal prompt can make chatbots produce instructions for enriching uranium, building bombs, or synthesizing methamphetamine at home. The attack exploits a systemic weakness stemming from how LLMs are trained on instruction and policy data, which the researchers say makes it very hard to fix.

The malicious prompt combines several elements. First, it is formatted to resemble a policy file, such as XML, JSON, or INI, which tricks the chatbot into treating it as configuration it must obey rather than ordinary user input.

This lets attackers bypass system prompts and the safety measures trained into these models. The instructions do not need to be written in any particular policy language; the researchers note that the prompts are crafted so that the target LLM interprets them as policy regardless of format.

Second, especially dangerous requests can be rewritten in leetspeak, which replaces letters with similar-looking digits or symbols. According to the researchers, more modern reasoning models, including Gemini 2.5 and OpenAI's o1, required more elaborate prompts to produce consistent results.

Finally, the prompt uses well-known roleplaying techniques, directing the model to assume particular roles, jobs, or personas within fictional settings. Despite being specifically trained to refuse such requests, all of the major models produced dangerous content under this attack. The technique could also be used to extract complete system prompts.

The paper notes that the chatbots themselves cannot reliably police this kind of dangerous material; external monitoring is needed to detect and respond to injection attacks in real time.
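One simple form such external monitoring could take is pattern-matching user input for policy-file syntax before it reaches the model. The sketch below is a hypothetical illustration, not HiddenLayer's actual detector; the patterns and threshold are assumptions for demonstration.

```python
import re

# Naive sketch of an external prompt-injection monitor: flag inputs that
# mimic policy/config file syntax (XML, JSON, INI) for review before they
# reach the model. Patterns and threshold are illustrative assumptions.
POLICY_MARKERS = [
    r"<\s*/?\s*\w+\s*>",   # XML-style tags
    r"^\s*\[\w+\]\s*$",    # INI section headers
    r"\"\w+\"\s*:\s*",     # JSON key-value pairs
]

def looks_like_policy_injection(user_input: str, threshold: int = 2) -> bool:
    """Return True if the input matches enough policy-file patterns to warrant review."""
    hits = sum(
        1 for pattern in POLICY_MARKERS
        if re.search(pattern, user_input, re.MULTILINE)
    )
    return hits >= threshold

print(looks_like_policy_injection('<config>{"override": true}</config>'))  # True
```

A real deployment would need far more than regexes, of course, but the point the paper makes stands: detection has to happen outside the model, because the model itself is the thing being deceived.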

The existence of repeatable universal bypasses means attackers no longer need specialized knowledge or model-specific tuning. Anyone with a keyboard could submit the dangerous prompt, obtain instructions for producing anthrax, and take complete control over the model, the researchers shared.

The study also warns that dedicated security tools and detection techniques are clearly needed to keep these chatbots safe and guarded at all times.
