AI leaders like OpenAI and DeepMind see themselves as being in a race to build artificial general intelligence (AGI): a model capable of performing any intellectual task that a human can. At the same time, the US and Chinese governments see the AI race as a national-security priority that demands massive investments reminiscent of the Manhattan Project. In both cases, AI is seen as a new form of ‘hard power,’ accessible only to superpowers with vast computational resources.
But this view is both incomplete and increasingly outdated. Since the Chinese developer DeepSeek launched its lower-cost, competitively performing model earlier this year, we have been in a new era. No longer is the ability to build cutting-edge AI tools confined to a few tech giants. Multiple high-performance models have emerged around the world, showing that AI’s true potential lies in its capacity to extend soft power.
The era of bigger-is-better models ended in 2024. Since then, model superiority has not been determined solely by scale. DeepSeek proved not only that top-tier models can be built without massive capital, but also that using advanced development techniques can radically accelerate AI progress. Dubbed the ‘Robin Hood of AI democratization,’ its decision to go open-source sparked a wave of innovation.
The near-monopoly held by OpenAI and a handful of other companies just a few months ago has given way to a multipolar, highly competitive landscape. Alibaba (Qwen) and Moonshot AI (Kimi) in China have since released powerful open-source models, Sakana AI (my own company) in Japan has open-sourced AI innovations, and US giant Meta is investing heavily in its open-source Llama program.
Boasting state-of-the-art model performance is no longer sufficient. Consider AI chatbots: they can give ‘70-point’ answers to general questions, but they cannot achieve the ‘99-point’ precision or reliability needed for most real-world tasks—from loan evaluations to production scheduling—that rely heavily on experts’ collective know-how. The old framework, in which foundation models were considered in isolation from specific applications, has reached its limit.
Real-world AI must now handle interdependent tasks, ambiguous procedures, conditional logic and exceptional cases—all messy variables that demand tightly integrated systems. Model developers must take more responsibility for the design of specific applications, and app developers must engage more deeply with the foundational technology.
Such integration matters for the future of geopolitics no less than it does for business. This is reflected in the concept of ‘sovereign AI,’ which calls for reducing dependence on foreign technology suppliers in the name of national AI autonomy. Historically, the concern outside the US has been that by outsourcing critical infrastructure—search engines, social media, smartphones—to giant Silicon Valley firms, countries incurred persistent digital trade deficits. Were AI to follow the same path, the economic losses could grow exponentially. Moreover, many worry about ‘kill switches’ that could shut off foreign-sourced AI infrastructure at any time. For all these reasons, domestic AI development is now seen as essential.
But sovereign AI doesn’t have to mean that every tool is domestically built. In fact, from a cost-efficiency and risk-diversification perspective, it is still better to mix and match models from around the world. The true goal of sovereign AI should not merely be to achieve self-sufficiency, but to amass AI soft power by building models that others want to adopt voluntarily.
Traditionally, soft power has referred to the appeal of ideas like democracy and human rights, cultural exports like Hollywood films, and, more recently, digital technologies and platforms. When diverse AI models coexist globally, the most widely adopted ones will become sources of subtle yet profound soft power, given how embedded they will be in people’s decision-making.
From the perspective of developers, public acceptance of their tools will be critical to success. Many potential users are already wary of Chinese AI systems (and US systems as well), owing to perceived risks of coercion, surveillance and privacy violations. It is easy to imagine that only the most trustworthy AIs will be fully embraced by governments, businesses and individuals. If Japan and Europe could offer such models, they would be well placed to earn the confidence of the Global South—a prospect with far-reaching geopolitical implications.
Trustworthy AI isn’t just about eliminating bias or preventing data leaks. It must also embody human-centric principles—enhancing rather than replacing people’s potential. If AI ends up concentrating wealth and power in the hands of a few, it will deepen inequality and erode social cohesion.
The AI story has just begun, and it need not become a ‘winner-takes-all’ race. But in both the ageing North and the youthful Global South, AI-driven inequality may create lasting divides. It is in developers’ own interest to ensure that AI is a trusted tool of empowerment, not a pervasive instrument of control. ©2025/Project Syndicate
The author is a former Japanese diplomat and co-founder of Sakana AI.