Salesforce’s new CoAct-1 agents write their own code to accomplish tasks

By Advanced AI Editor | August 13, 2025 | 8 Mins Read

Researchers at Salesforce and the University of Southern California have developed a technique that lets computer-use agents execute code while navigating graphical user interfaces (GUIs): the agent can write scripts as well as move a cursor and click buttons in an application, combining the strengths of both approaches to speed up workflows and reduce errors.

This hybrid approach allows an agent to bypass brittle and inefficient mouse clicks for tasks that can be better accomplished through coding.

The system, called CoAct-1, sets a new state-of-the-art on key agent benchmarks, outperforming other methods while requiring significantly fewer steps to accomplish complex tasks on a computer.

This upgrade can pave the way for more robust and scalable agent automation with significant potential for real-world applications.

The fragility of point-and-click AI agents

Computer-use agents typically rely on vision-language and vision-language-action models (VLMs or VLAs) to perceive a screen and take action, mimicking how a person uses a mouse and keyboard.

While these GUI-based agents can perform a variety of tasks, they often falter when faced with long, complex workflows, especially in applications with dense menus and options, like office productivity suites.

For example, a task that involves locating a specific table in a spreadsheet, filtering it, and saving it as a new file can involve a long and precise sequence of GUI manipulations.

This is where brittleness creeps in. “In these scenarios, existing agents frequently struggle with visual grounding ambiguity (e.g., distinguishing between visually similar icons or menu items) and the accumulated probability of making any single error over the long horizon,” the researchers write in their paper. “A single mis-click or misunderstood UI element can derail the entire task.”
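
To see why long horizons are so punishing, a rough back-of-the-envelope calculation helps (the per-step reliability below is an assumed, illustrative figure, not one reported in the paper): if each GUI action succeeds with probability p, a workflow of N roughly independent actions succeeds with probability about p^N, which decays quickly as N grows.

    # Illustrative only: 0.98 per-step reliability is an assumed figure,
    # not a number from the CoAct-1 paper.
    p_step = 0.98                          # assumed chance a single GUI action succeeds
    for n_steps in (5, 15, 30):
        p_task = p_step ** n_steps         # chance the whole workflow finishes without a mis-click
        print(f"{n_steps} steps -> ~{p_task:.0%} end-to-end success")
    # prints roughly: 5 steps -> ~90%, 15 steps -> ~74%, 30 steps -> ~55%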

To address these challenges, many researchers have focused on augmenting GUI agents with high-level planners.

These systems use powerful reasoning models like OpenAI’s o3 to decompose a user’s high-level goal into a sequence of smaller, more manageable subtasks.

While this structured approach improves performance, it doesn’t solve the problem of navigating menus and clicking buttons, even for operations that could be done more directly and reliably with a few lines of code.

CoAct-1: A multi-agent team for computer tasks

To overcome these limitations, the researchers created CoAct-1 (Computer-using Agent with Coding as Actions), a system designed to “combine the intuitive, human-like strengths of GUI manipulation with the precision, reliability, and efficiency of direct system interaction through code.”

The system is structured as a team of three specialized agents that work together: an Orchestrator, a Programmer, and a GUI Operator.

CoAct-1 framework (source: arXiv)

The Orchestrator acts as the central planner or project manager. It analyzes the user’s overall goal, breaks it down into subtasks, and assigns each subtask to the best agent for the job. It can delegate backend operations like file management or data processing to the Programmer, which writes and executes Python or Bash scripts.

For frontend tasks that require clicking buttons or navigating visual interfaces, it turns to the GUI Operator, a VLM-based agent.

“This dynamic delegation allows CoAct-1 to strategically bypass inefficient GUI sequences in favor of robust, single-shot code execution where appropriate, while still leveraging visual interaction for tasks where it is indispensable,” the paper states.

The workflow is iterative. After the Programmer or GUI Operator completes a subtask, it sends a summary and a screenshot of the current system state back to the Orchestrator, which then decides the next step or concludes the task.

The Programmer agent uses an LLM to generate code, sends it to a code interpreter for execution, and refines it over multiple rounds.

Similarly, the GUI Operator uses an action interpreter that executes its commands (e.g., mouse clicks, typing) and returns the resulting screenshot, allowing it to see the outcome of its actions. The Orchestrator makes the final decision on whether the task should continue or stop.
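
As a rough sketch of that delegation loop (not the authors’ implementation; every function below, including the fixed toy plan inside the orchestrator, is a hypothetical stand-in), the control flow looks roughly like this:

    # Illustrative sketch only: the roles mirror the paper's description,
    # but these functions are invented stand-ins, not Salesforce's code.
    def programmer(subtask: str) -> dict:
        # Would normally ask an LLM for a Python/Bash script and run it in a
        # code interpreter, refining over several rounds. Stubbed here.
        return {"summary": f"executed script for: {subtask}", "screenshot": b"..."}

    def gui_operator(subtask: str) -> dict:
        # Would normally emit mouse/keyboard actions via an action interpreter
        # and capture the resulting screenshot. Stubbed here.
        return {"summary": f"performed clicks for: {subtask}", "screenshot": b"..."}

    def orchestrator(goal: str, history: list):
        # Would normally call a reasoning model to choose the next subtask and
        # the best-suited worker; a fixed toy plan stands in for that decision.
        plan = [("collect and transform files on disk", programmer),
                ("toggle an option only reachable through the UI", gui_operator)]
        return plan[len(history)] if len(history) < len(plan) else None

    def coact_loop(goal: str, max_turns: int = 20) -> list:
        history = []
        for _ in range(max_turns):
            step = orchestrator(goal, history)
            if step is None:                   # Orchestrator concludes the task
                break
            subtask, worker = step
            feedback = worker(subtask)         # summary + screenshot fed back to the planner
            history.append((subtask, feedback["summary"]))
        return history

    print(coact_loop("filter a spreadsheet and save it as a new file"))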

Example of CoAct-1 in action (source: arXiv)

A more efficient path to automation

The researchers tested CoAct-1 on OSWorld, a comprehensive benchmark that includes 369 real-world tasks across browsers, IDEs, and office applications.

The results show CoAct-1 establishes a new state-of-the-art, achieving a success rate of 60.76%.

The performance gains were most significant in categories where programmatic control offers a clear advantage, such as OS-level tasks and multi-application workflows.

For instance, consider an OS-level task like finding all image files within a complex folder structure, resizing them, and then compressing the entire directory into a single archive.

A purely GUI-based agent would need to perform a long, brittle sequence of clicks and drags, opening folders, selecting files, and navigating menus, with a high chance of error at each step.

CoAct-1, by contrast, can delegate this entire workflow to its Programmer agent, which can accomplish the task with a single, robust script.
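
As a concrete illustration, a script of the kind the Programmer agent could emit for that subtask might look like the sketch below; the folder paths, target size, and use of the Pillow library are our assumptions, not details from the paper.

    # Hypothetical single "code action" replacing a long GUI sequence:
    # find images under a folder tree, resize them, and zip the results.
    # Paths and the target size are illustrative placeholders.
    import shutil
    from pathlib import Path

    from PIL import Image  # assumes Pillow is available in the agent's sandbox

    src = Path("~/Documents/project_photos").expanduser()
    out = Path("/tmp/resized_photos")
    out.mkdir(parents=True, exist_ok=True)

    for img_path in src.rglob("*"):
        if img_path.suffix.lower() in {".png", ".jpg", ".jpeg"}:
            with Image.open(img_path) as im:
                im.thumbnail((1024, 1024))        # shrink in place, keeping aspect ratio
                im.save(out / img_path.name)

    shutil.make_archive("/tmp/resized_photos_archive", "zip", out)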

Beyond just a higher success rate, the system is dramatically more efficient. CoAct-1 solves tasks in an average of just 10.15 steps, a stark contrast to the 15.22 steps required by leading GUI-only agents like GTA-1.

While other agents like OpenAI’s CUA 4o averaged fewer steps, their overall success rate was much lower, indicating CoAct-1’s efficiency is coupled with greater effectiveness.

The researchers found a clear trend: tasks that require more actions are more likely to fail. Reducing the number of steps not only speeds up task completion but, more importantly, minimizes the opportunities for error.

Therefore, finding ways to compress multiple GUI steps into a single programmatic task can make the process both more efficient and less error-prone.

As the researchers conclude, “This efficiency underscores the potential of our approach to pave a more robust and scalable path toward generalized computer automation.”

CoAct-1 performs tasks with fewer steps on average thanks to smart use of coding (source: arXiv)

From the lab to the enterprise workflow

The potential for this technology goes beyond general productivity. For enterprise leaders, the key lies in automating complex, multi-tool processes where full API access is a luxury, not a guarantee.

Ran Xu, a co-author of the paper and Director of Applied AI Research at Salesforce, points to customer support as a prime example.

“A service support agent uses many different tools — general tools such as Salesforce, industry-specific tools such as EPIC for healthcare, and a lot of customized tools — to investigate a customer request and formulate a response,” Xu told VentureBeat. “Some of the tools have API access while others don’t. It is a perfect use case that could potentially benefit from our technology: a compute-use agent that leverages whatever is available from the computer, whether it’s an API, code, or just the screen.”

Xu also sees high-value applications in sales, such as prospecting at scale and automating bookkeeping, and in marketing for tasks like customer segmentation and campaign asset generation.

Navigating real-world challenges and the need for human oversight

While the results on the OSWorld benchmark are strong, enterprise environments are far messier, filled with legacy software and unpredictable UIs.

This raises critical questions about robustness, security, and the need for human oversight.

A core challenge is ensuring the Orchestrator agent makes the right choice when faced with an unfamiliar application. According to Xu, the path to making agents like CoAct-1 robust for custom enterprise software involves training them with feedback in realistic, simulated environments.

The goal is to create a system where the “agent could observe how human agents work, get trained within a sandbox, and when it goes live, continue to solve tasks under the guidance and guardrail of a human agent.”

The ability for the Programmer agent to execute its own code also introduces obvious security concerns. What stops the agent from executing harmful code based on an ambiguous user request?

Xu confirms that robust containment is essential. “Access control and sandboxing is the key,” he said, emphasizing that a human must “understand the implication and give the AI access for safety.”

Sandboxing and guardrails will be critical to validating agent behavior before deployment on critical systems.
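
One minimal containment layer, sketched below purely for illustration (real deployments would add OS-level isolation such as containers or dedicated VMs, and the helper name is ours, not from the paper), is to run each generated script in a scratch directory with a hard timeout:

    # Illustrative containment sketch: a scratch working directory plus a hard
    # timeout. This is one layer only; it does not restrict network access or
    # file access outside the working directory.
    import subprocess
    import tempfile

    def run_generated_script(script: str, timeout_s: int = 30) -> subprocess.CompletedProcess:
        with tempfile.TemporaryDirectory() as workdir:
            return subprocess.run(
                ["python3", "-c", script],
                cwd=workdir,               # confine relative-path writes to scratch space
                capture_output=True,
                text=True,
                timeout=timeout_s,         # kill runaway or hanging scripts
            )

    result = run_generated_script("print('hello from a sandboxed step')")
    print(result.stdout)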

Ultimately, for the foreseeable future, overcoming ambiguity will likely require a human-in-the-loop. When asked about handling vague user queries, a concern also raised in the paper, Xu suggested a phased approach. “I see human-in-the-loop to start,” he noted.

While some tasks may eventually become fully autonomous, for high-stakes operations, human validation will remain crucial. “Some mission-critical ones may always need human approval.”
