Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now
Anthropic has begun testing a Chrome browser extension that allows its Claude AI assistant to take control of users’ web browsers, marking the company’s entry into an increasingly crowded and potentially risky arena where artificial intelligence systems can directly manipulate computer interfaces.
The San Francisco-based AI company announced Tuesday that it would pilot “Claude for Chrome” with 1,000 trusted users on its premium Max plan, positioning the limited rollout as a research preview designed to address significant security vulnerabilities before wider deployment. The cautious approach contrasts sharply with more aggressive moves by competitors OpenAI and Microsoft, who have already released similar computer-controlling AI systems to broader user bases.
The announcement underscores how quickly the AI industry has shifted from developing chatbots that simply respond to questions toward creating “agentic” systems capable of autonomously completing complex, multi-step tasks across software applications. This evolution represents what many experts consider the next frontier in artificial intelligence — and potentially one of the most lucrative, as companies race to automate everything from expense reports to vacation planning.
How AI agents can control your browser but hidden malicious code poses serious security threats
Claude for Chrome allows users to instruct the AI to perform actions on their behalf within web browsers, such as scheduling meetings by checking calendars and cross-referencing restaurant availability, or managing email inboxes and handling routine administrative tasks. The system can see what’s displayed on screen, click buttons, fill out forms, and navigate between websites — essentially mimicking how humans interact with web-based software.
AI Scaling Hits Its Limits
Power caps, rising token costs, and inference delays are reshaping enterprise AI. Join our exclusive salon to discover how top teams are:
Turning energy into a strategic advantage
Architecting efficient inference for real throughput gains
Unlocking competitive ROI with sustainable AI systems
Secure your spot to stay ahead: https://bit.ly/4mwGngO
“We view browser-using AI as inevitable: so much work happens in browsers that giving Claude the ability to see what you’re looking at, click buttons, and fill forms will make it substantially more useful,” Anthropic stated in its announcement.
However, the company’s internal testing revealed concerning security vulnerabilities that highlight the double-edged nature of giving AI systems direct control over user interfaces. In adversarial testing, Anthropic found that malicious actors could embed hidden instructions in websites, emails, or documents to trick AI systems into harmful actions without users’ knowledge—a technique called prompt injection.
Without safety mitigations, these attacks succeeded 23.6% of the time when deliberately targeting the browser-using AI. In one example, a malicious email masquerading as a security directive instructed Claude to delete the user’s emails “for mailbox hygiene,” which the AI obediently executed without confirmation.
“This isn’t speculation: we’ve run ‘red-teaming’ experiments to test Claude for Chrome and, without mitigations, we’ve found some concerning results,” the company acknowledged.
OpenAI and Microsoft rush to market while Anthropic takes measured approach to computer-control technology
Anthropic’s measured approach comes as competitors have moved more aggressively into the computer-control space. OpenAI launched its “Operator” agent in January, making it available to all users of its $200-per-month ChatGPT Pro service. Powered by a new “Computer-Using Agent” model, Operator can perform tasks like booking concert tickets, ordering groceries, and planning travel itineraries.
Microsoft followed in April with computer use capabilities integrated into its Copilot Studio platform, targeting enterprise customers with UI automation tools that can interact with both web applications and desktop software. The company positioned its offering as a next-generation replacement for traditional robotic process automation (RPA) systems.
The competitive dynamics reflect broader tensions in the AI industry, where companies must balance the pressure to ship cutting-edge capabilities against the risks of deploying insufficiently tested technology. OpenAI’s more aggressive timeline has allowed it to capture early market share, while Anthropic’s cautious approach may limit its competitive position but could prove advantageous if safety concerns materialize.
“Browser-using agents powered by frontier models are already emerging, making this work especially urgent,” Anthropic noted, suggesting the company feels compelled to enter the market despite unresolved safety issues.
Why computer-controlling AI could revolutionize enterprise automation and replace expensive workflow software
The emergence of computer-controlling AI systems could fundamentally reshape how businesses approach automation and workflow management. Current enterprise automation typically requires expensive custom integrations or specialized robotic process automation software that breaks when applications change their interfaces.
Computer-use agents promise to democratize automation by working with any software that has a graphical user interface, potentially automating tasks across the vast ecosystem of business applications that lack formal APIs or integration capabilities.
Salesforce researchers recently demonstrated this potential with their CoAct-1 system, which combines traditional point-and-click automation with code generation capabilities. The hybrid approach achieved a 60.76% success rate on complex computer tasks while requiring significantly fewer steps than pure GUI-based agents, suggesting substantial efficiency gains are possible.
“For enterprise leaders, the key lies in automating complex, multi-tool processes where full API access is a luxury, not a guarantee,” explained Ran Xu, Director of Applied AI Research at Salesforce, pointing to customer support workflows that span multiple proprietary systems as prime use cases.
University researchers release free alternative to Big Tech’s proprietary computer-use AI systems
The dominance of proprietary systems from major tech companies has prompted academic researchers to develop open alternatives. The University of Hong Kong recently released OpenCUA, an open-source framework for training computer-use agents that rivals the performance of proprietary models from OpenAI and Anthropic.
The OpenCUA system, trained on over 22,600 human task demonstrations across Windows, macOS, and Ubuntu, achieved state-of-the-art results among open-source models and performed competitively with leading commercial systems. This development could accelerate adoption by enterprises hesitant to rely on closed systems for critical automation workflows.
Anthropic’s safety testing reveals AI agents can be tricked into deleting files and stealing data
Anthropic has implemented several layers of protection for Claude for Chrome, including site-level permissions that allow users to control which websites the AI can access, mandatory confirmations before high-risk actions like making purchases or sharing personal data, and blocking access to categories like financial services and adult content.
The company’s safety improvements reduced prompt injection attack success rates from 23.6% to 11.2% in autonomous mode, though executives acknowledge this remains insufficient for widespread deployment. On browser-specific attacks involving hidden form fields and URL manipulation, new mitigations reduced the success rate from 35.7% to zero.
However, these protections may not scale to the full complexity of real-world web environments, where new attack vectors continue to emerge. The company plans to use insights from the pilot program to refine its safety systems and develop more sophisticated permission controls.
“New forms of prompt injection attacks are also constantly being developed by malicious actors,” Anthropic warned, highlighting the ongoing nature of the security challenge.
The rise of AI agents that click and type could fundamentally reshape how humans interact with computers
The convergence of multiple major AI companies around computer-controlling agents signals a significant shift in how artificial intelligence systems will interact with existing software infrastructure. Rather than requiring businesses to adopt new AI-specific tools, these systems promise to work with whatever applications companies already use.
This approach could dramatically lower the barriers to AI adoption while potentially displacing traditional automation vendors and system integrators. Companies that have invested heavily in custom integrations or RPA platforms may find their approaches obsoleted by general-purpose AI agents that can adapt to interface changes without reprogramming.
For enterprise decision-makers, the technology presents both opportunity and risk. Early adopters could gain significant competitive advantages through improved automation capabilities, but the security vulnerabilities demonstrated by companies like Anthropic suggest caution may be warranted until safety measures mature.
The limited pilot of Claude for Chrome represents just the beginning of what industry observers expect to be a rapid expansion of computer-controlling AI capabilities across the technology landscape, with implications that extend far beyond simple task automation to fundamental questions about human-computer interaction and digital security.
As Anthropic noted in its announcement: “We believe these developments will open up new possibilities for how you work with Claude, and we look forward to seeing what you’ll create.” Whether those possibilities ultimately prove beneficial or problematic may depend on how successfully the industry addresses the security challenges that have already begun to emerge.