OpenCUA’s open source computer-use agents rival proprietary models from OpenAI and Anthropic

By Advanced AI Editor | August 23, 2025 | 7 Mins Read

A new framework from researchers at The University of Hong Kong (HKU) and collaborating institutions provides an open source foundation for creating robust AI agents that can operate computers. The framework, called OpenCUA, includes the tools, data, and recipes for scaling the development of computer-use agents (CUAs).

Models trained using this framework perform strongly on CUA benchmarks, outperforming existing open source models and competing closely with closed agents from leading AI labs like OpenAI and Anthropic.

The challenge of building computer-use agents

Computer-use agents are designed to autonomously complete tasks on a computer, from navigating websites to operating complex software. They can also help automate workflows in the enterprise. However, the most capable CUA systems are proprietary, with critical details about their training data, architectures, and development processes kept private.

“As the lack of transparency limits technical advancements and raises safety concerns, the research community needs truly open CUA frameworks to study their capabilities, limitations, and risks,” the researchers state in their paper.

At the same time, open source efforts face their own set of hurdles. There has been no scalable infrastructure for collecting the diverse, large-scale data needed to train these agents. Existing open source datasets for graphical user interfaces (GUIs) are limited in scale, and many research projects provide insufficient detail about their methods, making it difficult for others to replicate their work.

According to the paper, “These limitations collectively hinder advances in general-purpose CUAs and restrict a meaningful exploration of their scalability, generalizability, and potential learning approaches.”

Introducing OpenCUA

OpenCUA framework. Source: XLANG Lab at HKU

OpenCUA is an open source framework designed to address these challenges by scaling both the data collection and the models themselves. At its core is the AgentNet Tool for recording human demonstrations of computer tasks on different operating systems.

The tool streamlines data collection by running in the background on an annotator’s personal computer, capturing screen videos, mouse and keyboard inputs, and the underlying accessibility tree, which provides structured information about on-screen elements. This raw data is then processed into “state-action trajectories,” pairing a screenshot of the computer (the state) with the user’s corresponding action (a click, key press, etc.). Annotators can then review, edit, and submit these demonstrations.
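To make the trajectory format concrete, here is a minimal Python sketch of one state-action step; the class and field names (Action, Step, a11y_tree, and so on) are illustrative assumptions rather than the exact AgentNet schema.

```python
from dataclasses import dataclass
from typing import Literal, Optional

# Illustrative sketch of a single state-action step; names are assumptions
# for exposition, not the exact AgentNet data schema.
@dataclass
class Action:
    kind: Literal["click", "double_click", "type", "key", "scroll"]
    x: Optional[int] = None          # screen coordinates for pointer actions
    y: Optional[int] = None
    text: Optional[str] = None       # typed text or key combination

@dataclass
class Step:
    screenshot_path: str             # the "state": a screenshot of the screen
    a11y_tree: dict                  # structured info about on-screen elements
    action: Action                   # the "action" the annotator performed

# One demonstration is the task instruction plus an ordered list of steps.
@dataclass
class Demonstration:
    instruction: str
    steps: list[Step]

demo = Demonstration(
    instruction="Export the open spreadsheet as a PDF",
    steps=[
        Step("frames/000.png", {"role": "menu", "name": "File"},
             Action(kind="click", x=42, y=13)),
        Step("frames/001.png", {"role": "menuitem", "name": "Export as PDF"},
             Action(kind="click", x=88, y=210)),
    ],
)
```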

AgentNet Tool. Source: XLANG Lab at HKU

Using this tool, the researchers collected the AgentNet dataset, which contains over 22,600 task demonstrations across Windows, macOS, and Ubuntu, spanning more than 200 applications and websites. “This dataset authentically captures the complexity of human behaviors and environmental dynamics from users’ personal computing environments,” the paper notes.

Recognizing that screen-recording tools raise significant data privacy concerns for enterprises, the researchers designed the AgentNet Tool with security in mind. Xinyuan Wang, co-author of the paper and PhD student at HKU, explained that they implemented a multi-layer privacy protection framework. “First, annotators themselves can fully observe the data they generate… before deciding whether to submit it,” he told VentureBeat. The data then undergoes manual verification for privacy issues and automated scanning by a large model to detect any remaining sensitive content before release. “This layered process ensures enterprise-grade robustness for environments handling sensitive customer or financial data,” Wang added.

To accelerate evaluation, the team also curated AgentNetBench, an offline benchmark that provides multiple correct actions for each step, offering a more efficient way to measure an agent’s performance.
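A step-level scorer of that kind could look roughly like the sketch below; the matching rules (exact text match, a pixel tolerance around click coordinates) are assumptions for illustration, since the article does not specify AgentNetBench's exact scoring logic.

```python
# Hypothetical step-level scorer: a predicted action counts as correct if it
# matches ANY of the reference actions annotated for that step.
def actions_match(pred: dict, ref: dict, click_tolerance: int = 10) -> bool:
    if pred["kind"] != ref["kind"]:
        return False
    if pred["kind"] in {"click", "double_click"}:
        # Allow a small pixel tolerance around the reference coordinates.
        return (abs(pred["x"] - ref["x"]) <= click_tolerance
                and abs(pred["y"] - ref["y"]) <= click_tolerance)
    return pred.get("text") == ref.get("text")

def step_score(predicted: dict, valid_actions: list[dict]) -> float:
    return 1.0 if any(actions_match(predicted, r) for r in valid_actions) else 0.0

# Example: two acceptable ways to open the File menu at this step.
refs = [{"kind": "click", "x": 42, "y": 13},
        {"kind": "key", "text": "alt+f"}]
print(step_score({"kind": "click", "x": 45, "y": 15}, refs))  # 1.0
```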

A new recipe for training agents

The OpenCUA framework introduces a novel pipeline for processing data and training computer-use agents. The first step converts the raw human demonstrations into clean state-action pairs suitable for training vision-language models (VLMs). However, the researchers found that simply training models on these pairs yields limited performance gains, even with large amounts of data.

OpenCUA chain-of-thought pipeline. Source: XLANG Lab at HKU

The key insight was to augment these trajectories with chain-of-thought (CoT) reasoning. This process generates a detailed “inner monologue” for each action, which includes planning, memory, and reflection. This structured reasoning is organized into three levels: a high-level observation of the screen, reflective thoughts that analyze the situation and plan the next steps, and finally, the concise, executable action. This approach helps the agent develop a deeper understanding of the tasks.
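As a rough illustration of how those three levels might be serialized into a training target for a vision-language model, consider the sketch below; the "Observation/Thought/Action" tags and wording are assumptions, not OpenCUA's actual prompt format.

```python
# Illustrative sketch of turning one step into a chain-of-thought training
# target with the three levels described above; tags are assumptions.
def build_cot_target(observation: str, thought: str, action: str) -> str:
    return (
        f"Observation: {observation}\n"
        f"Thought: {thought}\n"
        f"Action: {action}"
    )

target = build_cot_target(
    observation="A spreadsheet is open; the File menu is visible in the top bar.",
    thought="To export as PDF I should open the File menu first, then look for "
            "an Export option.",
    action="click(x=42, y=13)",
)
print(target)
```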

“We find natural language reasoning crucial for generalizable computer-use foundation models, helping CUAs internalize cognitive capabilities,” the researchers write.

This data synthesis pipeline is a general framework that can be adapted by companies to train agents on their own unique internal tools. According to Wang, an enterprise can record demonstrations of its proprietary workflows and use the same “reflector” and “generator” pipeline to create the necessary training data. “This allows them to bootstrap a high-performing agent tailored to their internal tools without needing to handcraft reasoning traces manually,” he explained.
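The generator/reflector idea could be wired together roughly as follows; generate_reasoning and check_consistency are hypothetical stand-ins for the underlying model calls, since the article does not describe the implementation.

```python
from typing import Optional

# Hypothetical sketch of the generator/reflector synthesis loop: a generator
# model drafts reasoning for a recorded step, and a reflector model checks
# that the reasoning is consistent with the action the human actually took.
def generate_reasoning(screenshot: str, instruction: str, action: str) -> str:
    ...  # call a VLM to draft the observation and thought for this step

def check_consistency(reasoning: str, action: str) -> bool:
    ...  # call a second model to verify the reasoning entails the action

def synthesize_step(screenshot: str, instruction: str, action: str,
                    max_attempts: int = 3) -> Optional[str]:
    for _ in range(max_attempts):
        reasoning = generate_reasoning(screenshot, instruction, action)
        if check_consistency(reasoning, action):
            return reasoning          # keep the accepted reasoning trace
    return None                       # drop steps that never pass reflection
```

Rejecting steps that never pass reflection keeps the synthetic reasoning aligned with the actions actually recorded in the demonstrations.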

Putting OpenCUA to the test

The researchers applied the OpenCUA framework to train a range of open source VLMs, including variants of Qwen and Kimi-VL, with parameter sizes from 3 billion to 32 billion. The models were evaluated on a suite of online and offline benchmarks that test their ability to perform tasks and understand GUIs.

The 32-billion-parameter model, OpenCUA-32B, established a new state-of-the-art success rate among open source models on the OSWorld-Verified benchmark. It also surpassed OpenAI’s GPT-4o-based CUA and significantly closed the performance gap with Anthropic’s leading proprietary models.

OpenCUA shows massive improvement over base models (left) while competing with leading CUA models (right). Source: XLANG Lab at HKU

For enterprise developers and product leaders, the research offers several key findings. The OpenCUA method is broadly applicable, improving performance on models with different architectures (both dense and mixture-of-experts) and sizes. The trained agents also show strong generalization, performing well across a diverse range of tasks and operating systems.

According to Wang, the framework is particularly suited for automating repetitive, labor-intensive enterprise workflows. “For example, in the AgentNet dataset, we already capture a few demonstrations of launching EC2 instances on Amazon AWS and configuring annotation parameters on MTurk,” he told VentureBeat. “These tasks involve many sequential steps but follow repeatable patterns.”

However, Wang noted that bridging the gap to live deployment requires addressing key challenges around safety and reliability. “The biggest challenge in real deployment is safety and reliability: the agent must avoid mistakes that could inadvertently alter system settings or trigger harmful side effects beyond the intended task,” he said.

The researchers have released the code, dataset, and weights for their models.

As open source agents built on frameworks like OpenCUA become more capable, they could fundamentally evolve the relationship between knowledge workers and their computers. Wang envisions a future where proficiency in complex software becomes less important than the ability to clearly articulate goals to an AI agent.

He described two primary modes of work: “offline automation, where the agent leverages its broader software knowledge to pursue a task end-to-end,” and “online collaboration, where the agent responds in real-time and works side by side with the human, much like a colleague.” In short, humans provide the strategic “what,” while increasingly sophisticated AI agents handle the operational “how.”
