Advanced AI News
OpenAI Research

OpenAI research results: ‘The longer an AI model takes to infer, the more resistant it becomes to adversarial attacks’

By Advanced AI Editor | June 12, 2025


Jan 23, 2025 10:55:00

OpenAI has published research showing that the longer an AI model’s inference time, the more effective its defense against adversarial attacks that intentionally confuse the model.

Trading inference-time compute for adversarial robustness | OpenAI

https://openai.com/index/trading-inference-time-compute-for-adversarial-robustness/

AI developers have researched defenses against adversarial attacks for years, because a model vulnerable to such attacks can be made to behave in ways its developer never intended.

New research published by OpenAI suggests that the longer an AI model’s inference time — that is, the more time and resources it spends ‘thinking’ — the more robust it may be against a variety of attacks.

OpenAI subjected its own o1-preview and o1-mini models to attacks such as prompts crafted to elicit incorrect answers to mathematical problems, image-based attacks designed to extract malicious answers, and ‘many-shot jailbreaking’, which confuses the AI by conveying a large amount of information at once. The researchers then examined how each model’s inference time affected the success of these attacks.

‘Many-shot jailbreaking’ is an attack method that exploits weaknesses in an AI’s safety training by bombarding it with a large number of example exchanges and placing a problematic question at the very end – GIGAZINE

The result: for most attack methods, the probability of a successful attack decreased as inference time increased. For ‘many-shot jailbreaking’, OpenAI presents a heat map with the attacker’s resources on the Y axis and inference time on the X axis; color intensity represents the probability of attack success, from 0 (green) to 1 (yellow). Even when the attacker’s resources increase, the attack is more likely to fail if the inference time is long.
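The shape of that heat map can be sketched with a toy simulation. Everything below is invented for illustration (the success model, the resource and compute values are assumptions); it mirrors only the qualitative trend the researchers describe, not OpenAI's actual experiment:

```python
import random

random.seed(0)  # deterministic for reproducibility

def attack_succeeds(num_shots: int, inference_tokens: int) -> bool:
    """Toy stand-in for one attack attempt against a model.

    Success probability rises with attacker resources (num_shots)
    and falls with inference-time compute (inference_tokens),
    mirroring the qualitative trend reported in the research.
    """
    p = min(1.0, num_shots / 256) / (1.0 + inference_tokens / 1000)
    return random.random() < p

def success_rate(num_shots: int, inference_tokens: int, trials: int = 2000) -> float:
    """Estimate P(attack success) for one cell of the heat map."""
    hits = sum(attack_succeeds(num_shots, inference_tokens) for _ in range(trials))
    return hits / trials

# Sweep attacker resources (rows, Y axis) against inference-time
# budget (columns, X axis), as in the heat map described above.
heatmap = {
    (shots, tokens): success_rate(shots, tokens)
    for shots in (8, 64, 256)
    for tokens in (100, 1000, 10000)
}

# Even at the largest attacker budget, more inference time drives
# the estimated success rate down across this row of the grid.
row = [heatmap[(256, tokens)] for tokens in (100, 1000, 10000)]
```

In the real study the cell values come from querying models with different reasoning budgets, not from a closed-form probability; the sketch only shows how such a grid is organized and read.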

Because the AI model was not told what type of attack it was under, OpenAI emphasized that ‘we show that robustness is improved simply by adjusting the inference time.’

However, for attacks using prompts from a benchmark that asks the model to provide harmful information, the success rate did not decrease even as inference time increased. It was also shown that an attacker can deceive a model by inducing it to skip its reasoning entirely or to spend its inference time on unproductive work.

‘Defending against adversarial attacks is becoming increasingly urgent as modern AI models are used in critical applications and act as agents to take action on behalf of users. Despite years of dedicated research, the problem of adversarial attacks is far from solved, but we believe this work is a promising sign of the power of inference time,’ OpenAI said.


