Fabio Caversan is Vice President of Digital Business and Innovation at Stefanini, driving new product offerings and digital transformation.
In January 2025, the Chinese startup DeepSeek released R1, a sophisticated, open-source reasoning model. In a field dominated by proprietary models from companies like OpenAI and Google, R1 quickly gained traction as an accessible alternative.
The implications for enterprise AI are significant. Until recently, most leading systems were only available through closed APIs or expensive licensing agreements. With its open-source approach, DeepSeek broadened access to cutting-edge AI capabilities while enabling organizations to better understand, audit and customize the systems they deploy.
Efficiency, Energy And Enterprise Impact
The market was quick to respond to R1’s surprise debut. Within days, OpenAI and Google had announced new, lower pricing structures, and Microsoft began testing deployments through Azure.
However, despite the competitive threat, some industry leaders saw the launch as a step forward. Meta’s chief AI scientist, Yann LeCun, praised DeepSeek for accelerating the push toward open-source AI. Meanwhile, Microsoft CEO Satya Nadella called the development “good news,” arguing that increased access drives broader adoption.
The launch of R1 also brought benefits for companies focused on energy consumption. Historically, running AI models on enterprise infrastructure has required tremendous energy, so much so that in 2024, Microsoft announced plans to revive the Three Mile Island nuclear power plant in Pennsylvania to supply its data centers.
By delivering strong performance even on mid-tier hardware, R1 allows organizations to scale AI capabilities without the major infrastructure or energy costs typically associated with AI operations.
A Model That Does More With Less
With R1, high-performance models are showing up in places they couldn’t before—on modest infrastructure, under tighter budgets and in organizations previously priced out of advanced AI solutions entirely.
Key strategic advantages include:
• Flexible Implementation Without Cloud Dependency: DeepSeek can be deployed and tested on local infrastructure, as the sketch after this list illustrates. That reduces reliance on third-party APIs and provides more direct control over how systems are built and managed.
• Lower Total Cost Of Ownership: Because it’s open source and runs on modest hardware, DeepSeek lowers both licensing and infrastructure costs.
• Stronger Data Governance And Regulatory Fit: On-premises deployment gives organizations more control over data handling, making it easier to meet internal policies and regional privacy laws.
• Efficient Performance With Less Energy Draw: R1’s architecture allows for advanced capabilities without the heavy energy draw typically associated with large-scale AI.
• Enhanced Market Agility: Teams that adopt open-source models early will be able to move quickly and test new ideas in-house.
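To make the point about cloud independence concrete, here is a minimal sketch of what local deployment can look like, using the Hugging Face Transformers library and one of the distilled R1 checkpoints DeepSeek has published. The model ID, precision settings and prompt are illustrative assumptions; teams would size the model and hardware to their own environment.

```python
# A minimal sketch of running a distilled R1 checkpoint entirely on local hardware.
# The model ID is an illustrative assumption; pick the distilled size that fits your GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # half precision keeps memory within mid-tier GPU limits
    device_map="auto",           # places layers on whatever local hardware is available
)

messages = [{
    "role": "user",
    "content": "Summarize the data-governance risks of sending customer records to a third-party API.",
}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generation happens on-premises; no prompt or output leaves the machine.
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because inference runs on hardware the organization already controls, prompts and outputs never cross a third-party API, which is what underpins the data governance and cost advantages above.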
Auditability And Assurance
DeepSeek’s open-source architecture provides enterprises with transparency. As Grammarly CEO Rahul Roy-Chowdhury argued in an article for the World Economic Forum, transparency is a foundational strength of open-source systems. Because the underlying code and model weights are publicly available, organizations can audit and adapt open-source technology to meet their own security and ethical standards.
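As a rough illustration of what that audit can look like in practice, the sketch below uses the huggingface_hub client to list what a public model repository contains and to read its architecture configuration before any weights are brought in-house. The repository ID and the specific fields inspected are assumptions for illustration, not a prescribed audit process.

```python
# A sketch of a first-pass audit: enumerate what the public repository contains and
# read the architecture configuration before any weights enter the environment.
# The repository ID and inspected fields are illustrative assumptions.
import json

from huggingface_hub import HfApi, hf_hub_download

REPO_ID = "deepseek-ai/DeepSeek-R1"

api = HfApi()

# The full file listing is public, so reviewers can see exactly what would be pulled.
print("Published files:", api.list_repo_files(REPO_ID)[:10])

# The configuration ships alongside the weights; reviewers can check model family
# and context length against internal policy before anything is deployed.
config_path = hf_hub_download(repo_id=REPO_ID, filename="config.json")
with open(config_path) as fh:
    config = json.load(fh)

print("Architecture:", config.get("architectures"))
print("Context window:", config.get("max_position_embeddings"))
```

The same openness lets security teams go further, for example by rerunning published evaluations or reviewing the model code itself, which is not possible with a closed API.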
Barriers To Adoption
Despite these strengths, DeepSeek hasn’t yet reached mainstream enterprise adoption. Running a state-of-the-art open-source AI model on-premises requires expertise across DevOps, machine learning operations (MLOps) and AI engineering, and many organizations lack that depth of in-house capability.
Geopolitical tensions also muddy the waters. Because DeepSeek is headquartered in China, some organizations remain cautious. These organizations will need visible, ongoing assurance of data security, regulatory alignment and long-term technological autonomy to overcome this hesitation. Beyond the technology, companies need to understand how well a system runs, how easily it will integrate with existing workflows and whether it will introduce any compliance risks.
The Next Step For Enterprise AI
Winning in the next era of enterprise AI will require trust, agility and the ability to meet businesses where they are. As an open-source project, DeepSeek is in a position to outperform competitors in priority areas such as transparency and cost efficiency.
However, any provider looking to compete for enterprise adoption will need to invest in six key areas:
• Explainability And Fairness: For AI decisions to be trusted, especially in scenarios where they impact people, they need to be explainable and fair. Providers should build out or integrate interpretation tools, support external audits and share bias metrics. Clear documentation and audit pathways must be part of any enterprise offering.
• Scaling Open Source And Community Trust: Open-source projects succeed when they’re backed by active, well-supported communities. For providers, that means investing in developer experience, strong documentation and ongoing engagement to keep users and contributors connected to their core team.
• Security And Adversarial Risks: Wider deployment will make large AI models more attractive to attackers. Providers should implement “security by design” across the stack, run third-party audits and red team exercises, maintain rapid patch cycles and give self-hosted users detailed, actionable security guidance.
• Interoperability And Integration: Mainstream enterprise adoption will depend on seamless compatibility with legacy, cloud and hybrid IT environments. Providers should prioritize a mature SDK/API layer, build plug-ins for top enterprise platforms (such as Microsoft and Salesforce) and offer onboarding materials and “solution blueprints” for common enterprise use cases.
• Enterprise Support And Sustainability: For mainstream adoption, open source alone isn’t enough. Enterprises need support contracts, SLAs and deployment options that fit their infrastructure. Providers should build or enable commercial packages that give companies a choice between total self-hosting and managed or fully supported deployments.
• Continuous Innovation And Talent Retention: Falling behind on model quality or deployment features kills momentum quickly. Providers need strong internal R&D, active collaboration with outside researchers and a culture that prioritizes open peer review and innovation.
Conclusion
The release of R1 has shown that companies can deploy sophisticated AI with more speed and confidence than ever before. However, delivering a technically strong model is only part of the equation. For now, DeepSeek offers a rare combination of performance, flexibility and autonomy, and that puts it ahead of the curve. Whether it will stay there will depend on how quickly it can operationalize support and security at scale.