Across the business world, anxiety about AI’s effects on profit margins and job markets has been building for months. This concern reached new heights with the Massachusetts Institute of Technology’s buzzy “GenAI Divide” report, which delivered a sobering statistic: 95% of generative AI pilots failed to produce meaningful results for the companies surveyed. C-suite executives are edgy, investors are increasingly skeptical, and commentators are drafting post-AI-hype narratives. Yet the more pressing question may not be about success or failure rates, but whether we are measuring enterprise AI adoption with the right metrics in the first place.
First, some context. MIT’s Project NANDA initiative conducted an ambitious study of enterprise AI adoption: 150 leader interviews, 350 employee surveys, and analysis of 300 public deployments. The conclusion is that just 5% of companies saw rapid revenue gains from AI, while the rest stalled or saw little measurable impact on profit and loss statements.
For many, this suggests AI is the latest in a long line of over-hyped business technologies. Think blockchain, virtual reality, or the dotcom bubble. But look closer, and the seeds of a different narrative emerge, one far more important for the future of work and technology.
95% Failure? Pilots vs. Productivity
What MIT measured was the success rate of formal enterprise AI projects, including pilots, proofs-of-concept, or major business process overhauls. These “official” projects are the easiest to capture in an academic study or corporate press release. But the most meaningful AI impact is happening elsewhere—quietly, unofficially, and often completely invisible to the metrics that MIT tracked.
The truth is that many employees are already using generative AI and chatbots on their own terms. MIT’s own research flagged this as the rise of “shadow AI”: tools like ChatGPT and Copilot adopted ad hoc by staff to write emails, summarize documents, or brainstorm solutions. These tools often elude IT oversight and do not appear on any company’s official list of AI projects. Yet they drive real productivity gains, workflow shortcuts, and creativity boosts for individual workers and teams.
According to Salesforce’s latest Slack Workforce Index, published in June, a survey of 5,000 workers worldwide found that independent use of AI tools had increased 233% so far in 2025 and that 81% of respondents who used AI tools on their own reported higher job satisfaction than colleagues who did not. This paints a very different picture.
If we measure success solely by failed official pilots, we risk ignoring these micro-level, bottom-up transformations. The “95% failure” line becomes less of an indictment of AI itself, and more a mirror reflecting how business still imagines technology innovation: top-down, process-heavy, and focused on grand initiatives over organic, everyday utility.
The C-Suite’s AI Measurement Blind Spot
The challenge of measuring innovation is compounded by outdated metrics. Many leaders track AI by number of pilots launched, vendors on retainer, or the volume of “AI-powered” features shipped to customers. Rarely is success connected to process improvements at the team or department level, or reductions in the drudgery of back-office tasks.
The MIT report hints at this blind spot. In the sectors that saw the biggest gains, the return on investment came not from splashy customer-facing apps but from automating mundane administrative work: eliminating low-value outsourcing, streamlining workflow handoffs, or simply making individual employees faster and more effective. These changes, however, are hard to capture in standard profit and loss math and are rarely celebrated in shareholder letters.
There is also a cultural dimension to this missing measurement. “Shadow AI” usage is not sanctioned from above. It is about employees quietly equipping themselves for a changing work reality. Many knowledge workers use AI tools daily without their manager’s knowledge. In fact, some companies discourage such experimentation, worried about data privacy or the lack of formal governance. Yet this is how some of the most impactful technology shifts in history have begun: from the spreadsheet to the smartphone, it was often workers simply getting the job done who forced broader organizational evolution.
This is comparable to how cloud adoption began—not with CIO mandates, but with engineering teams putting AWS on their credit cards because procurement was too slow. Today’s AI revolution follows the same pattern. Joshua Sircus, CEO of Stellar IT Solutions, told me via an email interview that “if the leadership of a company does not step in and encourage or discourage the use of AI (with monitoring), they will risk being part of the ‘wild wild west’ movement.”
The most forward-thinking organizations are finding middle ground, encouraging AI exploration while establishing guardrails against hallucinations and misuse. However, as Sircus also points out, “it’s entrepreneurial to encourage it but not all companies want entrepreneurs.” This tension between innovation and control is precisely why traditional ROI measurements miss the true value of AI in many organizations.
What Should We Really Be Measuring For AI Adoption?
Rather than counting pilots or tracking big-bang projects, the real indicators of AI success come from questions like these:
How often are team members automating small, repetitive tasks?
Are knowledge workers collaborating better or producing faster results?
Are job roles evolving as people combine general-purpose AI with domain expertise?
Where are informal workflows quietly boosting business value—without a single new pilot launched?
A New Lens for AI’s Business Impact
It is time to ask whether enterprise AI is really failing, or whether we are simply failing to measure success in the right places. The next great management revolution may not be driven by headline-grabbing pilots, but by millions of small, powerful changes happening out of sight. Yes, the GenAI divide is real. But it is not about winners and losers so much as measured versus unmeasured success. Those who find ways to embrace and recognize the hidden revolution in how work really gets done will be the ones who win the AI future.
The real AI revolution is not failing—it is just happening beyond the reach of our current metrics. And that’s exactly why it has such potential to succeed: through countless trials and errors happening where we aren’t even looking.