Close Menu
  • Home
  • AI Models
    • DeepSeek
    • xAI
    • OpenAI
    • Meta AI Llama
    • Google DeepMind
    • Amazon AWS AI
    • Microsoft AI
    • Anthropic (Claude)
    • NVIDIA AI
    • IBM WatsonX Granite 3.1
    • Adobe Sensi
    • Hugging Face
    • Alibaba Cloud (Qwen)
    • Baidu (ERNIE)
    • C3 AI
    • DataRobot
    • Mistral AI
    • Moonshot AI (Kimi)
    • Google Gemma
    • xAI
    • Stability AI
    • H20.ai
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Microsoft Research
    • Meta AI Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Matt Wolfe AI
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Manufacturing AI
    • Media & Entertainment
    • Transportation AI
    • Education AI
    • Retail AI
    • Agriculture AI
    • Energy AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
What's Hot

MovieCORE: COgnitive REasoning in Movies – Takara TLDR

China AI Chip Leader Cambricon Posts Record Profit as DeepSeek Drives Demand_the_to

Joyson Electronics, Alibaba Cloud form AI partnership for embodied robotics

Facebook X (Twitter) Instagram
Advanced AI News
  • Home
  • AI Models
    • OpenAI (GPT-4 / GPT-4o)
    • Anthropic (Claude 3)
    • Google DeepMind (Gemini)
    • Meta (LLaMA)
    • Cohere (Command R)
    • Amazon (Titan)
    • IBM (Watsonx)
    • Inflection AI (Pi)
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Meta AI Research
    • Microsoft Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • AI Experts
    • Google DeepMind
    • Lex Fridman
    • Meta AI Llama
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • The TechLead
    • Matt Wolfe AI
    • Andrew Ng
    • OpenAI
    • Expert Blogs
      • François Chollet
      • Gary Marcus
      • IBM
      • Jack Clark
      • Jeremy Howard
      • Melanie Mitchell
      • Andrew Ng
      • Andrej Karpathy
      • Sebastian Ruder
      • Rachel Thomas
      • IBM
  • AI Tools
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
  • AI Policy
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
  • Business AI
    • Advanced AI News Features
    • Finance AI
    • Healthcare AI
    • Education AI
    • Energy AI
    • Legal AI
LinkedIn Instagram YouTube Threads X (Twitter)
Advanced AI News
VentureBeat AI

Anthropic launches Claude for Chrome in limited beta, but prompt injection attacks remain a major concern

By Advanced AI EditorAugust 27, 2025No Comments8 Mins Read
Share Facebook Twitter Pinterest Copy Link Telegram LinkedIn Tumblr Email
Share
Facebook Twitter LinkedIn Pinterest Email


Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now

Anthropic has begun testing a Chrome browser extension that allows its Claude AI assistant to take control of users’ web browsers, marking the company’s entry into an increasingly crowded and potentially risky arena where artificial intelligence systems can directly manipulate computer interfaces.

The San Francisco-based AI company announced Tuesday that it would pilot “Claude for Chrome” with 1,000 trusted users on its premium Max plan, positioning the limited rollout as a research preview designed to address significant security vulnerabilities before wider deployment. The cautious approach contrasts sharply with more aggressive moves by competitors OpenAI and Microsoft, who have already released similar computer-controlling AI systems to broader user bases.

The announcement underscores how quickly the AI industry has shifted from developing chatbots that simply respond to questions toward creating “agentic” systems capable of autonomously completing complex, multi-step tasks across software applications. This evolution represents what many experts consider the next frontier in artificial intelligence — and potentially one of the most lucrative, as companies race to automate everything from expense reports to vacation planning.

How AI agents can control your browser but hidden malicious code poses serious security threats

Claude for Chrome allows users to instruct the AI to perform actions on their behalf within web browsers, such as scheduling meetings by checking calendars and cross-referencing restaurant availability, or managing email inboxes and handling routine administrative tasks. The system can see what’s displayed on screen, click buttons, fill out forms, and navigate between websites — essentially mimicking how humans interact with web-based software.

AI Scaling Hits Its Limits

Power caps, rising token costs, and inference delays are reshaping enterprise AI. Join our exclusive salon to discover how top teams are:

Turning energy into a strategic advantage

Architecting efficient inference for real throughput gains

Unlocking competitive ROI with sustainable AI systems

Secure your spot to stay ahead: https://bit.ly/4mwGngO

“We view browser-using AI as inevitable: so much work happens in browsers that giving Claude the ability to see what you’re looking at, click buttons, and fill forms will make it substantially more useful,” Anthropic stated in its announcement.

However, the company’s internal testing revealed concerning security vulnerabilities that highlight the double-edged nature of giving AI systems direct control over user interfaces. In adversarial testing, Anthropic found that malicious actors could embed hidden instructions in websites, emails, or documents to trick AI systems into harmful actions without users’ knowledge—a technique called prompt injection.

Without safety mitigations, these attacks succeeded 23.6% of the time when deliberately targeting the browser-using AI. In one example, a malicious email masquerading as a security directive instructed Claude to delete the user’s emails “for mailbox hygiene,” which the AI obediently executed without confirmation.

“This isn’t speculation: we’ve run ‘red-teaming’ experiments to test Claude for Chrome and, without mitigations, we’ve found some concerning results,” the company acknowledged.

OpenAI and Microsoft rush to market while Anthropic takes measured approach to computer-control technology

Anthropic’s measured approach comes as competitors have moved more aggressively into the computer-control space. OpenAI launched its “Operator” agent in January, making it available to all users of its $200-per-month ChatGPT Pro service. Powered by a new “Computer-Using Agent” model, Operator can perform tasks like booking concert tickets, ordering groceries, and planning travel itineraries.

Microsoft followed in April with computer use capabilities integrated into its Copilot Studio platform, targeting enterprise customers with UI automation tools that can interact with both web applications and desktop software. The company positioned its offering as a next-generation replacement for traditional robotic process automation (RPA) systems.

The competitive dynamics reflect broader tensions in the AI industry, where companies must balance the pressure to ship cutting-edge capabilities against the risks of deploying insufficiently tested technology. OpenAI’s more aggressive timeline has allowed it to capture early market share, while Anthropic’s cautious approach may limit its competitive position but could prove advantageous if safety concerns materialize.

“Browser-using agents powered by frontier models are already emerging, making this work especially urgent,” Anthropic noted, suggesting the company feels compelled to enter the market despite unresolved safety issues.

Why computer-controlling AI could revolutionize enterprise automation and replace expensive workflow software

The emergence of computer-controlling AI systems could fundamentally reshape how businesses approach automation and workflow management. Current enterprise automation typically requires expensive custom integrations or specialized robotic process automation software that breaks when applications change their interfaces.

Computer-use agents promise to democratize automation by working with any software that has a graphical user interface, potentially automating tasks across the vast ecosystem of business applications that lack formal APIs or integration capabilities.

Salesforce researchers recently demonstrated this potential with their CoAct-1 system, which combines traditional point-and-click automation with code generation capabilities. The hybrid approach achieved a 60.76% success rate on complex computer tasks while requiring significantly fewer steps than pure GUI-based agents, suggesting substantial efficiency gains are possible.

“For enterprise leaders, the key lies in automating complex, multi-tool processes where full API access is a luxury, not a guarantee,” explained Ran Xu, Director of Applied AI Research at Salesforce, pointing to customer support workflows that span multiple proprietary systems as prime use cases.

University researchers release free alternative to Big Tech’s proprietary computer-use AI systems

The dominance of proprietary systems from major tech companies has prompted academic researchers to develop open alternatives. The University of Hong Kong recently released OpenCUA, an open-source framework for training computer-use agents that rivals the performance of proprietary models from OpenAI and Anthropic.

The OpenCUA system, trained on over 22,600 human task demonstrations across Windows, macOS, and Ubuntu, achieved state-of-the-art results among open-source models and performed competitively with leading commercial systems. This development could accelerate adoption by enterprises hesitant to rely on closed systems for critical automation workflows.

Anthropic’s safety testing reveals AI agents can be tricked into deleting files and stealing data

Anthropic has implemented several layers of protection for Claude for Chrome, including site-level permissions that allow users to control which websites the AI can access, mandatory confirmations before high-risk actions like making purchases or sharing personal data, and blocking access to categories like financial services and adult content.

The company’s safety improvements reduced prompt injection attack success rates from 23.6% to 11.2% in autonomous mode, though executives acknowledge this remains insufficient for widespread deployment. On browser-specific attacks involving hidden form fields and URL manipulation, new mitigations reduced the success rate from 35.7% to zero.

However, these protections may not scale to the full complexity of real-world web environments, where new attack vectors continue to emerge. The company plans to use insights from the pilot program to refine its safety systems and develop more sophisticated permission controls.

“New forms of prompt injection attacks are also constantly being developed by malicious actors,” Anthropic warned, highlighting the ongoing nature of the security challenge.

The rise of AI agents that click and type could fundamentally reshape how humans interact with computers

The convergence of multiple major AI companies around computer-controlling agents signals a significant shift in how artificial intelligence systems will interact with existing software infrastructure. Rather than requiring businesses to adopt new AI-specific tools, these systems promise to work with whatever applications companies already use.

This approach could dramatically lower the barriers to AI adoption while potentially displacing traditional automation vendors and system integrators. Companies that have invested heavily in custom integrations or RPA platforms may find their approaches obsoleted by general-purpose AI agents that can adapt to interface changes without reprogramming.

For enterprise decision-makers, the technology presents both opportunity and risk. Early adopters could gain significant competitive advantages through improved automation capabilities, but the security vulnerabilities demonstrated by companies like Anthropic suggest caution may be warranted until safety measures mature.

The limited pilot of Claude for Chrome represents just the beginning of what industry observers expect to be a rapid expansion of computer-controlling AI capabilities across the technology landscape, with implications that extend far beyond simple task automation to fundamental questions about human-computer interaction and digital security.

As Anthropic noted in its announcement: “We believe these developments will open up new possibilities for how you work with Claude, and we look forward to seeing what you’ll create.” Whether those possibilities ultimately prove beneficial or problematic may depend on how successfully the industry addresses the security challenges that have already begun to emerge.

Daily insights on business use cases with VB Daily

If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI.

Read our Privacy Policy

Thanks for subscribing. Check out more VB newsletters here.

An error occured.



Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleMeta to spend tens of millions on pro-AI super PAC
Next Article MIT breakthrough: AI designs two new antibiotics to tackle drug-resistant superbugs
Advanced AI Editor
  • Website

Related Posts

How procedural memory can cut the cost and complexity of AI agents

August 26, 2025

Enterprise leaders say recipe for AI agents is matching them to existing processes — not the other way around

August 26, 2025

Gemini Nano Banana improves image editing consistency and control at scale for enterprises – but is not perfect

August 26, 2025

Comments are closed.

Latest Posts

A Well-Preserved Roman Mausoleum Unearthed in France

France Will Return Colonial-Era Human Remains to Madagascar

Vail Settles with Native American Artist in Suit on Pro-Palestine Art

Met Museum Plans Major Raphael Exhibition for 2026

Latest Posts

MovieCORE: COgnitive REasoning in Movies – Takara TLDR

August 27, 2025

China AI Chip Leader Cambricon Posts Record Profit as DeepSeek Drives Demand_the_to

August 27, 2025

Joyson Electronics, Alibaba Cloud form AI partnership for embodied robotics

August 27, 2025

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Recent Posts

  • MovieCORE: COgnitive REasoning in Movies – Takara TLDR
  • China AI Chip Leader Cambricon Posts Record Profit as DeepSeek Drives Demand_the_to
  • Joyson Electronics, Alibaba Cloud form AI partnership for embodied robotics
  • Is Claude AI the ‘ChatGPT killer’? Anthropic brings its agent to Chrome
  • Should the IBM Quantum Partnership Prompt Action From Advanced Micro Devices (AMD) Investors?

Recent Comments

  1. Henryclisk on 1-800-CHAT-GPT—12 Days of OpenAI: Day 10
  2. filmswen on How Cursor and Claude Are Developing AI Coding Tools Together
  3. LanceRap on 1-800-CHAT-GPT—12 Days of OpenAI: Day 10
  4. Akiho Yoshizawa on 1-800-CHAT-GPT—12 Days of OpenAI: Day 10
  5. Henryclisk on 1-800-CHAT-GPT—12 Days of OpenAI: Day 10

Welcome to Advanced AI News—your ultimate destination for the latest advancements, insights, and breakthroughs in artificial intelligence.

At Advanced AI News, we are passionate about keeping you informed on the cutting edge of AI technology, from groundbreaking research to emerging startups, expert insights, and real-world applications. Our mission is to deliver high-quality, up-to-date, and insightful content that empowers AI enthusiasts, professionals, and businesses to stay ahead in this fast-evolving field.

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

LinkedIn Instagram YouTube Threads X (Twitter)
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2025 advancedainews. Designed by advancedainews.

Type above and press Enter to search. Press Esc to cancel.