What It Means to Have AI Coworkers

By Advanced AI Editor | October 12, 2025 | 8 Mins Read


Some people who work in the tech industry say they already have AI coworkers. These are AI agents they communicate with on Slack, agents that can work autonomously for an extended period of time on tasks including software coding and sales outreach.

To better understand how that actually works, and how such agents might be used in other industries, we spoke with Amjad Masad, founder and CEO of Replit, a company that makes AI agents and AI coding tools. Here are excerpts from the conversation, which took place on the sidelines of Bloomberg’s Going to Work event in Baltimore this past week, edited for space and clarity:

Are AI coworkers here? And should more people expect to have them in their organizations?

Yes. They’re increasingly here, certainly since around January or February, when tech companies began embracing AI agents.

I would differentiate AI agents from AI copilots. With AI copilots, you have a chatbot that’s sitting there and you’re chatting with it, taking chunks of work, and it’s a one-shot type of relationship. Whereas AI agents can work for an extended period of time without monitoring: they can call a bunch of tools, access a lot of different databases and knowledge, and do deep research, and then they determine their halting condition, when they feel like they’re done or they couldn’t get the thing done, and come back to you. I would say we only got there about January or February of this year.

There’s a [nonprofit] called METR that put out a paper that talked about how long an AI agent can run unsupervised, and they were making the case that every seven months it is doubling. At the time it was like we were at five minutes, then 10 minutes. But they totally underestimated how fast it was going to go. I would say last year, two or three minutes was the max. Replit Agent 1 could run for two minutes unsupervised before it went off the rails and the context window filled up and it just couldn’t stay coherent. In February, it was like 20 minutes. Now our AI agent can run three hours doing actual useful work that will often be largely correct. And so it is not doubling. It is 10xing every few months.

By next year, you’ll be able to give AI agents chunks of work that will take a day or two to get done.
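
To make the runtime-scaling arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The figures (roughly 2 minutes a year ago, 20 minutes in February, 3 hours now) and their dates are approximations taken from the numbers quoted above, not measured benchmarks.

```python
# Rough check of the runtime-scaling claim above. Data points and dates are
# approximations drawn from the interview, not benchmarks.
import math
from datetime import date

# (approximate date, unsupervised runtime in minutes)
observations = [
    (date(2024, 10, 1), 2),    # "last year, two or three minutes was the max"
    (date(2025, 2, 1), 20),    # "In February, it was like 20 minutes"
    (date(2025, 10, 1), 180),  # "Now our AI agent can run three hours"
]

def doubling_time_months(t0, v0, t1, v1):
    """Months needed for runtime to double, assuming exponential growth."""
    months = (t1 - t0).days / 30.44
    return months * math.log(2) / math.log(v1 / v0)

for (t0, v0), (t1, v1) in zip(observations, observations[1:]):
    print(f"{t0} -> {t1}: runtime doubles roughly every "
          f"{doubling_time_months(t0, v0, t1, v1):.1f} months")

# Extrapolate one year ahead at the most recent growth rate.
dt = doubling_time_months(*observations[1], *observations[2])
projected_minutes = 180 * 2 ** (12 / dt)
print(f"Implied runtime a year out: about {projected_minutes / 60 / 24:.1f} days")
```

On these assumed figures, the doubling time comes out closer to one to three months than the seven months METR projected, and the one-year extrapolation lands at multi-day chunks of work, broadly in line with the prediction above.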

What is an example of the work that AI agents or AI coworkers are doing? People talk about, well, they could book your travel or things like that…

This [booking travel] actually turns out to be a hard problem. Consumer problems are harder because they are more decentralized: the agent needs to use a lot of different tools that it’s not trained on.

Software engineering is very, very clear, and for many reasons, software engineering is the one that companies are focused on. It has clear value. You can create reinforcement learning environments where the agents are learning very, very quickly because you could just give them a virtual machine, give them a verifiable goal, and they can learn. So we’re making a lot of progress there.
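
As a rough illustration of what such a “verifiable goal” environment can look like, here is a minimal sketch. It is not Replit’s actual training setup; it only shows the idea of an episode in which the agent submits code and the reward is whether a hidden check passes.

```python
# Minimal sketch of a verifiable-goal environment for a coding agent.
# Illustrative only; not any company's actual RL training setup.

class VerifiableCodingEnv:
    """One episode: the agent submits source code; reward is 1.0 if it passes hidden checks."""

    TASK = "Write a function `add(a, b)` that returns the sum of its arguments."

    def reset(self) -> str:
        """Start a new episode and return the task description (the observation)."""
        return self.TASK

    def step(self, submitted_code: str):
        """Run the submission against hidden tests and return (reward, done)."""
        namespace: dict = {}
        try:
            exec(submitted_code, namespace)            # run the agent's code
            fn = namespace["add"]
            passed = fn(2, 3) == 5 and fn(-1, 1) == 0  # the verifiable goal
        except Exception:
            passed = False
        return (1.0 if passed else 0.0), True          # single-step episode


if __name__ == "__main__":
    env = VerifiableCodingEnv()
    print(env.reset())
    # Stand-in for whatever code the agent would generate:
    reward, done = env.step("def add(a, b):\n    return a + b")
    print("reward:", reward)
```

The point is that the reward is computed mechanically, so the learning loop can run as fast as the virtual machines allow, with no human grading in the way.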

But there are a lot of other things that are similar to software engineering that are coming down the line. Support tickets are one area where it is happening very quickly. Support agents are getting deployed. Our support team would have been 10x larger in prior eras given the number of customers that we have.

You’re going to start to see it in sales development representative (SDR) and a lot of other go-to-market (GTM) roles. It is essentially a deep research agent: it’s qualifying leads, writing emails, doing outreach, and scheduling calendar events for the sales team.

The experience of working in a tech company right now is that you have in your Slack AI agents like Replit, Cursor, whatever. You can message @cursor, ‘Create this PR,’ and it can go work for an hour or two and create a pull request that otherwise you would have given to a junior engineer or an intern. So a lot of people have that experience of being able to be on Slack and talking to an AI agent like they would talk to a human. Software engineering for many reasons is way ahead of the curve, but we’re going to start seeing it in other areas.
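
For readers who have not seen this workflow, here is a hypothetical sketch of the Slack-side plumbing, written with Slack’s Bolt SDK for Python. The `run_agent` function, the token names, and the response wording are placeholders; the actual Replit and Cursor integrations are not public in this form.

```python
# Hypothetical sketch: an agent listening for Slack mentions and posting back a PR link.
# Requires `pip install slack_bolt`. `run_agent` is a placeholder, not a real integration.
import os
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])


def run_agent(task: str) -> str:
    """Placeholder: dispatch the long-running coding agent and return a PR link."""
    return f"(pretend pull-request link for task: {task!r})"


@app.event("app_mention")
def handle_mention(event, say):
    # e.g. someone types "@agent Create this PR: fix the login bug"
    task = event["text"]
    say("On it. I'll post a pull request here when I'm done.", thread_ts=event["ts"])
    pr_link = run_agent(task)  # in reality this would run asynchronously, possibly for hours
    say(f"Done: {pr_link}", thread_ts=event["ts"])


if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```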

What are the implications for human work if you have AI coworkers? In terms of what human workers spend their time on, in terms of the nature and quantity of jobs, and the dynamics within organizations…

For so long, the entire economy has been bottlenecked by software engineers. We need a lot more software. So even though AI agents are happening, it is hard to see them actually creating displacement within engineering. We need more software engineers to manage more AI agents. There’s still more demand for software engineers. But not every aspect of the economy is like that. Not every job and role is like that. I don’t need infinite support reps. I need enough support reps to answer the customers. The more AI agents can answer successfully, the fewer support reps we need. I would expect over the next year, 18 months, for support as a role to start really getting affected, and QA [quality assurance] as a role to start being truly affected.

My optimistic take is that there’s going to be less specialization, perhaps counterintuitively. Starting from the Industrial Revolution, we went into extreme specialization, and that led to the Marxist theory of alienation: I make only one part of the pencil, put it down the factory pipeline, and it goes to someone else. Right now, because people have access to these AI agents, entrepreneurs especially can do the marketing, the sales, and the engineering all by themselves. You can see companies that are making millions of dollars with one or two or three people. Even Replit—when we got to $150 million in annual recurring revenue, we were about 70 people. SaaS companies that were getting to that scale 10 years ago were 700 people. So there’s a factor of 10x right now where companies are potentially 10x smaller.

What that means is I would rather hire very smart generalists that can manage more AI agents. What kind of characteristics am I looking for? I’m looking for someone who’s a clear thinker and a clear communicator. Just being able to break down the ideas and give them to the AI requires clear communication, someone who’s organized and can do more work across the board, someone who understands the business problems. It benefits the generalists, the manager. The consultant types are actually very high leverage right now because they are fundamentally generalists. It disadvantages the hyper-specialized person in the enterprise.

When you have AI agents doing things like support and basic coding, they’re potentially replacing jobs that people do early in their careers. Do you see that as an issue? And do you see any solutions?

Yes, it is very much an issue. If I am a software engineering manager at Meta, do I hire four junior engineers that I have to manage and they come in with all the overhead people come in with? Or do I hire one senior software engineer that can spin up 10 agents at a time?

It’s very obvious that I’m going to go with the senior engineer. So the salaries for senior engineers have never been higher. You hear that anecdotally—I’m not sure we’re seeing this in the data yet—a lot of new grads are struggling to find a job.

That being said, there are new grads who are very good at using AI. They’ve been using AI for four years now, and we hire some of those people. We hired an 18-year-old kid, for example, who’s very good at coding with AI, learned how to code using AI. So that’s the counterpoint, which is he didn’t go to computer science school to get classical training in computer science and programming. He learned on his own how to be very, very proficient with AI.

This gives you a sign of where education should be going. It should be more practical, with more on-the-job training and more about how to work with AIs. Perhaps counterintuitively, I think the soft skills become more important than the hard skills. I don’t need them to know how assembly language works. I would rather see them be very generative in terms of ideas: able to generate a lot of ideas and to communicate those ideas clearly.



