More developers than ever before are using AI tools to both assist and generate code.
While enterprise AI adoption accelerates, new data from Stack Overflow’s 2025 Developer Survey exposes a critical blind spot: the mounting technical debt created by AI tools that generate “almost right” solutions, potentially undermining the productivity gains they promise to deliver.
Stack Overflow’s annual developer survey is among the largest reports of its kind. The 2024 edition found that developers were not worried that AI would take their jobs. Somewhat ironically, Stack Overflow itself was initially negatively impacted by the growth of gen AI, with declining traffic and resulting layoffs in 2023.
The 2025 survey of over 49,000 developers across 177 countries reveals a troubling paradox in enterprise AI adoption. AI usage continues climbing—84% of developers now use or plan to use AI tools, up from 76% in 2024. Yet trust in these tools has cratered.
“One of the most surprising findings was a significant shift in developer preferences for AI compared to previous years: while most developers use AI, they like it less and trust it less this year,” Erin Yepis, Senior Analyst for Market Research and Insights at Stack Overflow, told VentureBeat. “This response is surprising because with all of the investment in and focus on AI in tech news, I would expect that the trust would grow as the technology gets better.”
The numbers tell the story. Only 33% of developers trust AI accuracy in 2025, down from 43% in 2024 and 42% in 2023. AI favorability dropped from 77% in 2023 to 72% in 2024 to just 60% this year.
But the survey data reveals a more urgent concern for technical decision-makers. Developers cite “AI solutions that are almost right, but not quite” as their top frustration—66% report this problem. Meanwhile, 45% say debugging AI-generated code takes more time than expected. AI tools promise productivity gains but may actually create new categories of technical debt.
The ‘almost right’ phenomenon disrupts developer workflows
AI tools don’t just produce obviously broken code. They generate plausible solutions that require significant developer intervention to become production-ready. This creates a particularly insidious productivity problem.
“AI tools seem to have a universal promise of saving time and increasing productivity, but developers are spending time addressing the unintended breakdowns in the workflow caused by AI,” Yepis explained. “Most developers say AI tools do not address complexity, only 29% believed AI tools could handle complex problems this year, down from 35% last year.”
Unlike obviously broken code that developers quickly identify and discard, “almost right” solutions demand careful analysis. Developers must understand what’s wrong and how to fix it. Many report it would be faster to write the code from scratch than to debug and correct AI-generated solutions.
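To make the pattern concrete, here is a hypothetical sketch of the kind of “almost right” output developers describe (the prompt and function names below are invented for illustration, not drawn from the survey): the code runs and looks reasonable, but it silently violates one of the stated requirements.

```python
# Hypothetical illustration of an "almost right" AI suggestion.
# Invented prompt: "remove duplicate order IDs while keeping their original order."

def dedupe_order_ids(order_ids):
    # Plausible-looking output: runs fine and passes a quick spot check,
    # but set() does not preserve insertion order, so the "keep original
    # order" requirement is silently violated for most inputs.
    return list(set(order_ids))

def dedupe_order_ids_reviewed(order_ids):
    # The fix a reviewer has to notice and write: dict preserves insertion
    # order (Python 3.7+), so this keeps the first occurrence of each ID in order.
    return list(dict.fromkeys(order_ids))

print(dedupe_order_ids_reviewed([3, 1, 3, 2, 1]))  # [3, 1, 2]
```

The specific bug matters less than the review burden: the first function will often pass a cursory glance and a happy-path test, which is exactly the hidden cost developers report.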
The workflow disruption extends beyond individual coding tasks. The survey found 54% of developers use six or more tools to complete their jobs. This adds context-switching overhead to an already complex development process.
Enterprise governance frameworks trail behind adoption
Rapid AI adoption has outpaced enterprise governance capabilities. Organizations now face potential security and technical debt risks they haven’t fully addressed.
“Vibe coding requires a higher level of trust in the AI’s output, and sacrifices confidence and potential security concerns in the code for a faster turnaround,” Ben Matthews, Senior Director of Engineering at Stack Overflow, told VentureBeat.
Developers largely reject vibe coding for professional work, with 77% noting that it’s not part of their professional development process. Yet the survey reveals gaps in how enterprises manage AI-generated code quality.
Matthews warns that AI coding tools powered by LLMs can and do produce mistakes. While knowledgeable developers can identify and test vulnerable code themselves, he noted, LLMs are sometimes unable to recognize the mistakes they produce.
Security risks compound these quality issues. The survey data shows that when developers would still turn to humans for coding help, 61.7% cite “ethical or security concerns about code” as a key reason. This suggests that AI tools introduce integration challenges around data access, performance and security that organizations are still learning to manage.
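As a hedged illustration of the security gap Matthews describes (this snippet is hypothetical, not taken from the survey or from any AI tool’s actual output), an assistant might suggest building a SQL query with string formatting, leaving it to a human reviewer to catch the injection risk and substitute a parameterized query:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Plausible AI output: interpolates user input directly into the SQL text,
    # which allows SQL injection through the username parameter.
    return conn.execute(
        f"SELECT id, email FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The reviewed version: a parameterized query keeps user input out of the SQL text.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```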
Developers still use Stack Overflow and other human sources of expertise
Despite declining trust, developers aren’t abandoning AI tools. They’re developing more sophisticated strategies for integrating them into workflows. The survey shows 69% of developers spent time learning new coding techniques or programming languages in the past year. Of these, 44% used AI-enabled tools for learning, up from 37% in 2024.
Even with the rise of vibe coding and AI, the survey data shows that developers maintain strong connections to human expertise and community resources. Stack Overflow remains the top community platform at 84% usage. GitHub follows at 67% and YouTube at 61%. Most tellingly, 89% of developers visit Stack Overflow multiple times per month. Among these, 35% turn to the platform specifically after encountering issues with AI responses.
“Although we have seen a decline in traffic, in no way is it as dramatic as some would indicate,” Jody Bailey, Stack Overflow’s Chief Product & Technology Officer, told VentureBeat.
That said, Bailey acknowledged that times change and that users’ day-to-day needs are not what they were 16 years ago when Stack Overflow got started. Virtually every site and company, he noted, is seeing a shift in where its users come from and how they engage with gen AI tools. That shift is prompting Stack Overflow to critically reassess how it gauges success in the modern digital age.
“The future vitality of the internet and the broader tech ecosystem will no longer be solely defined by metrics of success outlined in the 90s or early 00s,” Bailey said. “Instead, the emphasis is increasingly on the caliber of data, the reliability of information, and the incredibly vital role of expert communities and individuals in meticulously creating, sharing and curating knowledge.”
Strategic recommendations for technical decision-makers
The Stack Overflow data suggests several key considerations for enterprise teams evaluating AI development tools.
Invest in debugging and code review capabilities: With 45% of developers reporting increased debugging time for AI code, organizations need stronger code review processes and debugging tools designed specifically for AI-generated solutions.
Maintain human expertise pipelines: Continued reliance on community platforms and human consultation shows that AI tools amplify rather than replace the need for experienced developers. These experts can identify and correct AI-generated code issues.
Implement staged AI adoption: Successful AI adoption requires careful integration with existing tools and processes rather than wholesale replacement of development workflows. This allows developers to leverage AI strengths while mitigating “almost right” solution risks.
Focus on AI tool literacy: Developers using AI tools daily show 88% favorability compared to 64% for weekly users. This suggests proper training and integration strategies significantly impact outcomes.
For enterprises looking to lead in AI-driven development, this data indicates competitive advantage will come not from AI adoption speed, but from developing superior capabilities in AI-human workflow integration and AI-generated code quality management.
Organizations that solve the “almost right” problem, turning AI tools into reliable productivity multipliers rather than sources of technical debt, will gain significant advantages in development speed and code quality.