Brandon Wang is vice president of Synopsys.
AI performance is advancing at incredible speed, and AI infrastructure spending is skyrocketing; according to IDC, the market will surpass $200 billion within the next five years. But raw power alone isn’t enough. The real challenge is balancing performance with efficiency. DeepSeek and OpenAI, two major players in AI, understand that scaling models without optimizing cost, speed and quality isn’t sustainable. Instead, they make smart trade-offs among these factors using a strategic framework called “techonomics.” Understanding how these companies apply techonomics can give us a glimpse into the future of AI, where efficiency is just as important as capability.
Finding The AI “Sweet Spot”
Techonomics, a concept introduced by Aart de Geus, co-founder and executive chair of Synopsys, is a way of thinking about technology in terms of its economic trade-offs. It considers three key factors: quality of results (QoR), time to results (TTR) and cost of results (CoR). QoR measures how good an AI model’s output is, TTR measures how quickly it delivers results and CoR measures the resources required to achieve that quality.
Techonomics asks a simple but critical question: Is the extra quality worth the extra cost and time? Engineers and business leaders must constantly balance these trade-offs to find the “sweet spot.” As AI models become more advanced, the law of diminishing returns kicks in. Throwing more money and compute at a model won’t always lead to meaningful improvements; sometimes, optimization is the smarter path.
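The diminishing-returns dynamic can be made concrete with a toy calculation. Every number below is hypothetical, chosen only to show the shape of the curve: when each doubling of spend buys a roughly constant quality increment, net value peaks at an intermediate spend level rather than at the maximum.

```python
import math

def quality(spend):
    # Hypothetical QoR curve with diminishing returns: each doubling of
    # spend adds a roughly constant quality increment.
    return 10 * math.log2(1 + spend)

def net_value(spend):
    # Value of the extra quality minus what it cost to get it (CoR).
    return quality(spend) - spend

spend_levels = [1, 2, 4, 8, 16, 32]
best = max(spend_levels, key=net_value)
print("sweet spot at spend =", best)
```

In this toy model the sweet spot lands at an interior spend level: pushing past it still raises quality, but the cost grows faster than the value of the improvement.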
DeepSeek And OpenAI Apply Techonomics To Be Better, Not Just Bigger
Both DeepSeek and OpenAI are pushing AI forward by making models better, not just bigger. One way is through model distillation, which involves training smaller AI models to learn from larger ones. This keeps performance high while reducing compute costs. Another approach is quantization, which lowers the precision of computations to save memory and processing power. They also rely on specialized hardware, such as TPUs and GPUs, and advanced distributed training techniques to maximize efficiency.
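Quantization is the easiest of these techniques to see in miniature. The sketch below is a generic symmetric int8 scheme, not any particular framework's implementation: weights are rescaled so the largest magnitude maps to 127, stored as small integers, and approximately recovered by multiplying the scale back in.

```python
def quantize_int8(weights):
    # Symmetric quantization: map the largest |weight| to 127.
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the int8 values.
    return [v * scale for v in q]

weights = [0.02, -0.51, 0.33, 1.27, -0.88]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
print(q, scale)
```

Each int8 value needs one byte instead of the four used by a float32 weight, a 4x memory saving, at the cost of a small rounding error (bounded by half the scale) in each recovered weight.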
Take DeepSeek for example. Instead of brute-force scaling, DeepSeek uses architectures like Mixture of Experts (MoE) and Multi-Head Latent Attention (MLA) to maximize performance while keeping costs manageable.
MoE works by activating only a small fraction of the model’s expert networks for any given input. Each expert is tuned for specific kinds of tasks, which boosts efficiency and scalability. This means DeepSeek can serve massive models without paying the compute cost of running every parameter on every token. MLA, for its part, compresses the attention mechanism’s stored keys and values into a smaller latent representation. This reduces memory usage during inference while still letting the model attend to different aspects of the data at once, preserving accuracy.
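The MoE routing idea can be sketched in a few lines. This is a deliberately toy illustration (not DeepSeek's architecture): a gate scores each expert for the current input, only the top-k experts actually execute, and their outputs are blended by the gate's probabilities, so per-token compute stays small even as the total parameter count grows.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, gate_weights, k=2):
    # Gate: score each expert for this input (here a simple dot product).
    scores = [sum(wi * xi for wi, xi in zip(w, x)) for w in gate_weights]
    probs = softmax(scores)
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    out = [0.0] * len(x)
    for i in top:
        y = experts[i](x)  # only the k selected experts ever run
        out = [o + probs[i] * yi for o, yi in zip(out, y)]
    return out, top

# Four tiny "experts"; only two run for this input.
experts = [lambda x, s=s: [s * xi for xi in x] for s in (1.0, 2.0, 3.0, 4.0)]
gate_weights = [[0.1, 0.0], [0.9, 0.1], [0.0, 0.2], [0.3, 0.8]]
out, used = moe_forward([1.0, 0.5], experts, gate_weights, k=2)
print("experts that ran:", used)
```

With four experts and k=2, half the network is skipped on every call; real MoE models push this much further, with hundreds of experts and only a handful active per token.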
DeepSeek’s philosophy is rooted in algorithmic advancements instead of just scaling up compute. Moreover, their open-source approach allows the AI community to build on their innovations, which leads to more collaboration and transparency. This balance of cutting-edge technology and economic practicality gives them a competitive edge.
The Three Phases Of AI Evolution
Like prior waves of technology innovation, AI development is following three distinct phases, each shaped by the interplay between quality, time and cost. The first phase, early research and prototype development, is all about maximizing the quality of results. AI companies race to build groundbreaking models, and in the process prioritize performance over cost. Computational demands are sky-high, iteration cycles are long and experimentation is risky but necessary to push boundaries.
The second phase, scale-up and efficiency optimization, is where we are now. The focus has shifted from raw capability to making AI more practical. Developers are working to improve efficiency through innovations like MoE and model distillation. Compute resources are no longer unlimited, so cost control is a key priority. The goal is to keep improving AI while making it faster and cheaper to deploy.
The third phase, maturity and ecosystem integration, will define AI’s future. The technology will be embedded into everyday applications, from enterprise solutions to edge computing. Efficiency will be the top priority so AI can scale affordably without sacrificing quality. New hardware and software advancements will be instrumental in keeping AI both powerful and cost-effective.
What AI Developers Need To Know
Open-source AI is clearly a game changer. DeepSeek’s open-source approach allows more developers to contribute, which in turn accelerates AI progress. But only organizations with a strong advantage in computing power, advanced algorithms or massive datasets will dominate AI infrastructure.
The focus of scaling AI is shifting toward smarter architectures, better resource management and reduced compute overhead. The next wave of AI innovation will be about making it practical for real-world use at scale. DeepSeek and OpenAI are showing that efficiency and optimization are just as important as raw power.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.