Why your enterprise AI strategy needs both open and closed models: The TCO reality check

By Advanced AI Editor, June 27, 2025

This article is part of VentureBeat’s special issue, “The Real Cost of AI: Performance, Efficiency and ROI at Scale.” Read more from this special issue.

For the last two decades, enterprises have had a choice between open-source and closed proprietary technologies.

The original choice for enterprises centered primarily on operating systems, with Linux offering an open-source alternative to Microsoft Windows. In the developer realm, open-source languages like Python and JavaScript dominate, and open-source technologies such as Kubernetes have become standards in the cloud.

The same type of choice between open and closed is now facing enterprises for AI, with multiple options for both types of models. On the proprietary closed-model front are some of the biggest, most widely used models on the planet, including those from OpenAI and Anthropic. On the open-source side are models like Meta’s Llama, IBM Granite, Alibaba’s Qwen and DeepSeek.

Understanding when to use an open or a closed model is a critical choice for enterprise AI decision-makers in 2025 and beyond. The choice carries both financial and customization implications that enterprises need to understand and consider for either option.

Understanding the difference between open and closed licenses

There is no shortage of hyperbole around the decades-old rivalry between open and closed licenses. But what does it all actually mean for enterprise users?

A closed-source proprietary technology, like OpenAI’s GPT-4o, does not have its model code, training data, or model weights open or available for anyone to see. The model is not easily available for fine-tuning and, generally speaking, it is only available for real enterprise usage at a cost (sure, ChatGPT has a free tier, but that’s not going to cut it for a real enterprise workload).

An open technology, like Meta Llama, IBM Granite, or DeepSeek, has openly available code. Enterprises can use the models freely, generally without restrictions, including fine-tuning and customizations.

Rohan Gupta, a principal with Deloitte, told VentureBeat that the open vs. closed source debate isn’t unique or native to AI, nor is it likely to be resolved anytime soon. 

Gupta explained that closed-source providers typically offer several wrappers around their models that enable ease of use, simplified scaling, more seamless upgrades and downgrades, and a steady stream of enhancements. They also provide significant developer support, including documentation and hands-on advice, and often deliver tighter integrations with both infrastructure and applications. In exchange, an enterprise pays a premium for these services.

 “Open-source models, on the other hand, can provide greater control, flexibility and customization options, and are supported by a vibrant, enthusiastic developer ecosystem,” Gupta said. “These models are increasingly accessible via fully managed APIs across cloud vendors, broadening their distribution.”

Making the choice between open and closed model for enterprise AI

The question that many enterprise users might ask is which is better: an open or a closed model? The answer, however, is not necessarily one or the other.

“We don’t view this as a binary choice,” David Guarrera, Generative AI Leader at EY Americas, told VentureBeat. “Open vs. closed is increasingly a fluid design space, where models are selected, or even automatically orchestrated, based on tradeoffs between accuracy, latency, cost, interpretability and security at different points in a workflow.”

Guarrera noted that closed models limit how deeply organizations can optimize or adapt behavior. Proprietary model vendors often restrict fine-tuning, charge premium rates, or hide the process in black boxes. While API-based tools simplify integration, they abstract away much of the control, making it harder to build highly specific or interpretable systems.

In contrast, open-source models allow for targeted fine-tuning, guardrail design and optimization for specific use cases. This matters more in an agentic future, where models are no longer monolithic general-purpose tools, but interchangeable components within dynamic workflows. The ability to finely shape model behavior, at low cost and with full transparency, becomes a major competitive advantage when deploying task-specific agents or tightly regulated solutions.

“In practice, we foresee an agentic future where model selection is abstracted away,” Guarrera said.

For example, a user may draft an email with one AI tool, summarize legal docs with another, search enterprise documents with a fine-tuned open-source model and interact with AI locally through an on-device LLM, all without ever knowing which model is doing what. 

“The real question becomes: what mix of models best suits your workflow’s specific demands?” Guarrera said.
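To make that mix concrete, here is a minimal sketch of what per-task model routing could look like in code. The task names, model labels and rationales are hypothetical illustrations, not any specific orchestration platform’s API.

```python
# Hypothetical sketch of per-task model routing; names, labels and rationales
# are illustrative only, not a real orchestration platform's API.
from dataclasses import dataclass

@dataclass
class ModelChoice:
    name: str
    open_weights: bool
    rationale: str

# Illustrative routing table: each task type maps to the model profile that
# best balances accuracy, latency, cost and data sensitivity for that task.
ROUTING_TABLE = {
    "draft_email":       ModelChoice("closed-api-general", False, "low stakes, convenience wins"),
    "summarize_legal":   ModelChoice("closed-api-frontier", False, "accuracy plus vendor compliance support"),
    "enterprise_search": ModelChoice("fine-tuned-open-model", True, "domain fine-tuning and data residency"),
    "on_device_assist":  ModelChoice("distilled-open-model", True, "runs locally, no data leaves the device"),
}

def route(task_type: str) -> ModelChoice:
    """Return the model profile configured for a task, defaulting to a closed API."""
    return ROUTING_TABLE.get(task_type, ModelChoice("closed-api-general", False, "default"))

if __name__ == "__main__":
    for task in ("summarize_legal", "enterprise_search", "unknown_task"):
        choice = route(task)
        kind = "open" if choice.open_weights else "closed"
        print(f"{task:18s} -> {choice.name} ({kind}): {choice.rationale}")
```

In a real deployment, the routing table would be driven by the accuracy, latency, cost and security tradeoffs Guarrera describes, and could be updated without touching the calling workflow.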

Considering total cost of ownership

With open models, the basic idea is that the model is freely available for use; enterprises, in contrast, always pay for closed models.

The reality when it comes to considering total cost of ownership (TCO) is more nuanced.

Praveen Akkiraju, Managing Director at Insight Partners, explained to VentureBeat that TCO has many different layers. A few key considerations include infrastructure hosting costs and engineering: Are the open-source models self-hosted by the enterprise or by the cloud provider? How much engineering, including fine-tuning, guardrailing and security testing, is needed to operationalize the model safely?

Akkiraju noted that fine-tuning an open-weights model can sometimes be a very complex task. Closed frontier-model companies spend enormous engineering effort to ensure performance across multiple tasks. In his view, unless enterprises deploy similar engineering expertise, they will face a complex balancing act when fine-tuning open-source models. This creates cost implications when organizations choose their model deployment strategy. For example, enterprises can fine-tune multiple model versions for different tasks or use one API for multiple tasks.

Ryan Gross, Head of Data & Applications at cloud-native services provider Caylent, told VentureBeat that, from his perspective, licensing terms don’t matter except in edge-case scenarios. The largest restrictions often pertain to model availability when data residency requirements are in place. In this case, deploying an open model on infrastructure like Amazon SageMaker may be the only way to get a state-of-the-art model that still complies. When it comes to TCO, Gross noted that the tradeoff lies between per-token costs and hosting and maintenance costs.

“There is a clear break-even point where the economics switch from closed to open models being cheaper,” Gross said. 

In his view, for most organizations, closed models, with hosting and scaling solved on the organization’s behalf, will have a lower TCO. However, for large enterprises and SaaS companies with very high demand on their LLMs but simpler use cases that don’t require frontier performance, or for AI-centric product companies, hosting distilled open models can be more cost-effective.
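As a rough illustration of that break-even logic, the back-of-the-envelope comparison below pits a pay-per-token closed API against a self-hosted open model. Every dollar figure and token volume is an assumption chosen for the example, not a quoted price from any vendor.

```python
# Back-of-the-envelope break-even between a pay-per-token closed API and a
# self-hosted open model. All figures are illustrative assumptions, not quotes.

CLOSED_COST_PER_1K_TOKENS = 0.01      # assumed blended $/1K tokens for a closed API
OPEN_HOSTING_PER_MONTH    = 12_000.0  # assumed monthly GPU hosting + ops for a self-hosted open model
OPEN_COST_PER_1K_TOKENS   = 0.001     # assumed marginal inference cost once hosted

def monthly_cost_closed(tokens: float) -> float:
    return tokens / 1_000 * CLOSED_COST_PER_1K_TOKENS

def monthly_cost_open(tokens: float) -> float:
    return OPEN_HOSTING_PER_MONTH + tokens / 1_000 * OPEN_COST_PER_1K_TOKENS

# Break-even volume: fixed hosting cost / per-token savings of the open model
break_even_tokens = OPEN_HOSTING_PER_MONTH / (
    (CLOSED_COST_PER_1K_TOKENS - OPEN_COST_PER_1K_TOKENS) / 1_000
)

print(f"Break-even at roughly {break_even_tokens / 1e9:.2f}B tokens per month")
for tokens in (5e8, 2e9, 5e9):
    print(f"{tokens / 1e9:4.1f}B tokens/mo: closed ${monthly_cost_closed(tokens):>10,.0f}"
          f" vs open ${monthly_cost_open(tokens):>10,.0f}")
```

With these assumed numbers, the crossover lands around 1.3 billion tokens per month; below that volume the managed API is cheaper, while above it the fixed hosting cost amortizes in the open model’s favor.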

How one enterprise software developer evaluated open vs closed models

Second Front Systems, where Josh Bosquez serves as CTO, is among the many firms that have had to consider and evaluate open vs. closed models.

“We use both open and closed AI models, depending on the specific use case, security requirements and strategic objectives,” Bosquez told VentureBeat.

Bosquez explained that open models allow his firm to integrate cutting-edge capabilities without the time or cost of training models from scratch. For internal experimentation or rapid prototyping, open models help his firm to iterate quickly and benefit from community-driven advancements.

“Closed models, on the other hand, are our choice when data sovereignty, enterprise-grade support and security guarantees are essential, particularly for customer-facing applications or deployments involving sensitive or regulated environments,” he said. “These models often come from trusted vendors, who offer strong performance, compliance support, and self-hosting options.”

Bosquez said that the model selection process is cross-functional and risk-informed, evaluating not only technical fit but also data handling policies, integration requirements and long-term scalability.

Looking at TCO, he said that it varies significantly between open and closed models, and that neither approach is universally cheaper.

“It depends on the deployment scope and organizational maturity,” Bosquez said. “Ultimately, we evaluate TCO not just on dollars spent, but on delivery speed, compliance risk and the ability to scale securely.”

What this means for enterprise AI strategy

For smart tech decision-makers evaluating AI investments in 2025, the open vs. closed debate isn’t about picking sides. It’s about building a strategic portfolio approach that optimizes for different use cases within your organization.

The immediate action items are straightforward. First, audit your current AI workloads and map them against the decision framework outlined by the experts, considering accuracy requirements, latency needs, cost constraints, security demands and compliance obligations for each use case. Second, honestly assess your organization’s engineering capabilities for model fine-tuning, hosting and maintenance, as this directly impacts your true total cost of ownership.

Third, begin experimenting with model orchestration platforms that can automatically route tasks to the most appropriate model, whether open or closed. This positions your organization for the agentic future that industry leaders, such as EY’s Guarrera, predict, where model selection becomes invisible to end-users.
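As one possible starting point for that audit, the sketch below scores each workload against a few of the criteria cited above and leans open or closed accordingly. The workloads, criteria and threshold are purely hypothetical placeholders, not a standard framework.

```python
# Illustrative workload audit: score each use case against a few criteria and
# lean toward open or closed models. All names, scores and the threshold are
# hypothetical placeholders, not a standard framework.

# Each criterion is scored 1-5; higher means the factor pushes toward
# self-hosted open models (strict data residency, heavy customization, huge volume).
WORKLOADS = {
    "customer_support_bot": {"customization": 2, "data_sensitivity": 2, "token_volume": 3, "compliance": 2},
    "clinical_notes_search": {"customization": 4, "data_sensitivity": 5, "token_volume": 4, "compliance": 5},
    "marketing_copy_drafts": {"customization": 1, "data_sensitivity": 1, "token_volume": 2, "compliance": 1},
}

OPEN_LEANING_THRESHOLD = 3.5  # assumed cutoff; tune to your organization

def recommend(scores: dict[str, int]) -> str:
    """Average the criterion scores and lean open or closed based on the cutoff."""
    avg = sum(scores.values()) / len(scores)
    return "open (self-hosted / fine-tuned)" if avg >= OPEN_LEANING_THRESHOLD else "closed (managed API)"

for name, scores in WORKLOADS.items():
    print(f"{name:24s} -> {recommend(scores)}")
```

In practice, the criteria list would mirror your own accuracy, latency, cost, security and compliance requirements rather than this toy rubric.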


