In the race to automate everything – from customer service to code – AI is being heralded as a silver bullet. The narrative is seductive: AI tools that can write entire applications, streamline engineering teams and eliminate the need for expensive human developers, along with workers in hundreds of other roles.
But from my point of view as a technologist who spends every day inside real companies’ data and workflows, the hype doesn’t match the reality.
I’ve worked with industry leaders like General Electric, The Walt Disney Company and Harvard Medical School to optimize their data and AI infrastructure, and here’s what I’ve learned: Replacing humans with AI in most jobs is still just an idea on the horizon.
I worry that we’re thinking too far ahead. In the past two years, more than a quarter of programming jobs have vanished. Mark Zuckerberg has announced that he plans to replace many of Meta’s coders with AI.
But, intriguingly, both Bill Gates and Sam Altman have publicly warned against replacing coders.
Right now, we shouldn’t count on AI tools to successfully replace jobs in tech or business. That’s because what AI knows is inherently limited by what it has seen – and most of what it has seen in the tech world is boilerplate.
Generative AI models are trained on large datasets, which typically fall into two main categories: publicly available data (from the open internet), or proprietary or licensed data (created in-house by the organization, or purchased from third parties).
Simple tasks, like building a basic website or configuring a template app, are easy wins for generative models. But when it comes to writing the sophisticated, proprietary infrastructure code that powers companies like Google or Stripe, there’s a problem: That code doesn’t exist in public repositories. It’s locked away inside the walls of corporations, inaccessible to training data and often written by engineers with decades of experience.
Right now, AI can’t reason on its own. And it doesn’t have instincts. It’s just mimicking patterns. A friend of mine in the tech world once described large language models (LLMs) as a “really good guesser.”
Think of AI today as a junior team member — helpful for a first draft or simple projects. But like any junior, it requires oversight. In programming, for example, while I’ve seen a 5X speedup on simple coding tasks, reviewing and correcting more complicated AI-produced code often takes more time and energy than writing the code myself.
You still need senior professionals with deep experience to find the flaws, and to understand the nuances of how those flaws might pose a risk six months from now.
That’s not to say AI shouldn’t have a place in the workplace. But the dream of replacing entire teams of programmers or accountants or marketers with one human and a host of AI tools is, for now, premature. We still need senior-level people in these jobs, and we need to train people in junior-level jobs to be technically capable enough to assume the more complex roles one day.
The goal of AI in tech and business shouldn’t be removing humans from the loop. I’m not saying this because I’m scared AI will take my job. I’m saying it because I’ve seen how dangerous trusting AI too much at this stage can be.
Business leaders, no matter what industry they’re in, should be aware: While AI promises cost savings and smaller teams, these efficiency gains could backfire. You might trust AI to perform more junior levels of work, but not to complete more sophisticated projects.
AI is fast. Humans are smart. There’s a big difference. The sooner we shift the conversation from replacing humans to reinforcing them, the more we’ll reap the benefits of AI.
Derek Chang is founding partner of Stratus Data.