
In a recent report, MIT’s NANDA team has uncovered a workplace trend where employee use of generative AI is widespread, but official enterprise deployment remains more limited. In “The GenAI Divide: State of AI in Business 2025,” the authors found that workers are using personal AI tools to do parts of their jobs, often without IT approval. In this “shadow AI economy,” as the report calls it, employees at a large share of firms use tools like ChatGPT or Claude for routine drafting, research, and analysis, even when their employers have not purchased a license.
The report’s headline statistic is that 95% of enterprise generative AI pilots are failing, with little to no measurable P&L impact. Official generative AI initiatives often stall because LLMs lack persistent memory and cannot learn from workflows. But these tools are flexible enough for individual use, the report says, noting that this shadow AI often delivers better ROI than formal initiatives.
“Behind the disappointing enterprise deployment numbers lies a surprising reality: AI is already transforming work, just not through official channels,” the authors say. “Our research uncovered a thriving ‘shadow AI economy’ where employees use personal ChatGPT accounts, Claude subscriptions, and other consumer tools to automate significant portions of their jobs, often without IT knowledge or approval.”

Only 40% of companies reported purchasing official LLM subscriptions, yet workers at over 90% of the companies surveyed said they regularly use personal AI tools for work tasks. In many firms, shadow users report turning to LLMs several times a day while official programs remain in pilot. This pattern suggests that individuals can bridge the adoption gap when they have flexible, responsive tools. Some organizations are now studying this shadow use, measuring where it helps, and using those findings to guide purchases of approved company tools. That approach could start to close the gap between personal use and formal deployment.
The report ties the GenAI Divide to the reality that LLMs often forget context, do not learn, and do not adapt. For mission-critical work, 90% of users still prefer humans. AI is preferred for quick tasks like email drafts (70%) and basic analysis (65%), while humans lead for complex or long-term work by roughly nine to one. Interviews conducted for the report echo this. Professionals value AI for brainstorming and first drafts but say it repeats errors and requires fresh context each time.
For jobs, the effect of shadow AI is a quiet shift in who does what. Employees already use personal tools to expedite routine work. The opening for firms is to bring that use into the light and make it safe, measurable, and tied to actual workflows. That means training, clear rules, and procurement of systems with memory, audit trails, and integration with existing processes. With proper governance, shadow activity could become sanctioned practice. Until enterprise tools learn and adapt, the practical gains will come from individual AI use and small workflow changes.