BEIJING, July 31, 2025 /PRNewswire/ — Another Chinese artificial intelligence (AI) large language model is now open-source: Chinese AI company Zhipu AI, once named by OpenAI as a “global competitor,” launched its new-generation flagship model, GLM-4.5, on Monday evening. Designed as a foundational model for agent applications, GLM-4.5 achieves technical breakthroughs by integrating capabilities such as complex reasoning, code generation, and agent interaction, and it leads global rankings in comprehensive benchmark performance, the Global Times learned from the company on Monday.
Following the open-source release of another world-leading large model by a Chinese AI company, global media outlets and international social media platforms quickly turned their attention to the domestic model’s performance and open-source availability. Within two hours of GLM-4.5’s release, X featured it on its homepage; within 12 hours, it ranked second globally on the AI community Hugging Face’s trending list.
The Global Times recently conducted an exclusive interview with Zhipu’s CEO, Zhang Peng, to discuss the latest features and breakthroughs of GLM-4.5, the rationale behind its full open-source strategy, and its impact on the development of Chinese large-scale models.
Among the world’s top models
The new flagship large model GLM-4.5 achieves open-source SOTA (state-of-the-art) performance in reasoning, coding, and agent capabilities. Evaluated across 12 representative benchmarks for comprehensive model assessment, GLM-4.5 ranks third globally, first among Chinese models, and first among open-source models in average composite score, according to Zhipu.
The model excels at full-stack development tasks, producing fairly complex applications, games (for example, a Flappy Bird clone with smooth data handling and interactive animation), and interactive webpages. Users can, for instance, use GLM-4.5 to build a functional search engine or a text-based short-video platform with a “like” feature.
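As an illustration, such a request could be issued through an OpenAI-compatible chat client. The sketch below is a minimal example only; the base URL, the model identifier “glm-4.5,” and the API-key environment variable are assumptions rather than details confirmed in this article, so readers should consult Zhipu’s official API documentation before use.

```python
# Illustrative only: asks a GLM-4.5-style chat endpoint to generate a small web app.
# Base URL, model name, and environment variable are assumptions, not confirmed here.
import os

from openai import OpenAI  # any OpenAI-compatible client works the same way

client = OpenAI(
    api_key=os.environ["ZHIPU_API_KEY"],               # hypothetical env var
    base_url="https://open.bigmodel.cn/api/paas/v4/",   # assumed compatible endpoint
)

response = client.chat.completions.create(
    model="glm-4.5",  # assumed model identifier
    messages=[
        {"role": "user",
         "content": "Generate a single-file HTML/JS Flappy Bird clone with a score counter."},
    ],
)

print(response.choices[0].message.content)  # generated code, ready to save as an .html file
```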
With double the parameter efficiency of its predecessors, GLM-4.5’s API costs are as low as 0.8 yuan ($0.11) per million input tokens and 2 yuan per million output tokens, far below those of mainstream models. Its processing speed exceeds 100 tokens per second, embodying a low-cost, high-speed approach, Zhang noted. Despite having half the parameters of DeepSeek R1 and one-third those of Kimi K2, GLM-4.5 outperforms them in multiple benchmark tests thanks to its superior parameter efficiency, Zhang said.
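Those quoted figures translate into a simple per-request estimate. The short sketch below does the arithmetic with made-up token counts, using only the prices and the 100-tokens-per-second lower bound cited above.

```python
# Back-of-the-envelope cost and latency estimate from the prices quoted in the article:
# 0.8 yuan per million input tokens, 2 yuan per million output tokens, >100 tokens/s.
# The example token counts are illustration values, not measurements.

INPUT_YUAN_PER_M = 0.8
OUTPUT_YUAN_PER_M = 2.0
TOKENS_PER_SECOND = 100  # lower bound cited by Zhang

def request_cost_yuan(input_tokens: int, output_tokens: int) -> float:
    """Cost of one API call in yuan at the quoted per-million-token prices."""
    return (input_tokens / 1_000_000) * INPUT_YUAN_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_YUAN_PER_M

# Example: a 2,000-token prompt producing a 1,000-token answer.
cost = request_cost_yuan(2_000, 1_000)
latency = 1_000 / TOKENS_PER_SECOND  # generation time at the quoted minimum speed
print(f"cost ≈ {cost:.4f} yuan, generation ≈ {latency:.0f} s")  # ≈ 0.0036 yuan, ≈ 10 s
```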
Zhang explained that GLM-4.5, built on “agentic AI,” breaks tasks into smaller units for higher precision. Its open-source nature gives developers free access, and its lightweight design requires only eight NVIDIA H20 chips, half the hardware of DeepSeek’s model.
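The “smaller units” idea can be pictured as a generic plan-then-execute loop. The sketch below is purely illustrative and does not describe Zhipu’s internal design; `ask_model` stands in for any chat call, such as the one shown earlier.

```python
# Generic plan-then-execute agent loop, sketched to illustrate "breaking tasks into
# smaller units"; it does NOT reflect Zhipu's actual implementation.
from typing import Callable, List

def run_agent(task: str, ask_model: Callable[[str], str]) -> List[str]:
    """Decompose a task into steps with one model call, then solve each step in turn."""
    plan = ask_model(f"List the minimal numbered steps needed to: {task}")
    steps = [line for line in plan.splitlines() if line.strip()]

    results = []
    for step in steps:
        # Each sub-task gets its own focused call, the claimed source of higher precision.
        results.append(ask_model(f"Carry out this step and return only the result: {step}"))
    return results
```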
Although the H20 chips, tailored for China amid US export restrictions, may face supply delays, Zhang confirmed that Zhipu has sufficient computing resources and does not need additional purchases.
Why choose open-source?
In addition to the highly regarded performance of this domestic large model, a key highlight is its full open-source release under the MIT license, one of the most permissive open-source licenses. Why opt for open-sourcing at this juncture, and what impact will it have on the development of Chinese large-scale models?
Zhang told the Global Times that Zhipu, one of China’s earliest AI companies to embrace open-sourcing, has a history of such moves. In 2022, it open-sourced a 100-billion-parameter base model, sparking global interest, with companies such as Apple and OpenAI downloading it for analysis. In early 2023, as ChatGPT gained traction, Zhipu released a 6-billion-parameter model that could run directly on laptops, inspiring more Chinese firms to adopt open-source strategies and raising public awareness.
On the rationale for this open-source release, Zhang emphasized its strategic importance for academia and for national strategy. Open-sourcing at this moment, he added, encourages industry and academia to rethink AI development, the prospects for AGI, and new computing paradigms.
CNBC commented on Monday that Chinese companies are making smarter AI models that are increasingly cheaper to use, echoing key aspects of DeepSeek’s market-shaking breakthrough.
US tech media outlet Techi said in an article on Monday that the introduction of GLM-4.5 adds to the growing surge of AI models from Chinese tech firms. “A trend is very evident in terms of developing AI in China and making it easily accessible,” it said.
With more firms offering open-source models at affordable prices, Techi added, “China is becoming a key player in the global race to lead in AI.”
“The company, a spin-off from Tsinghua University, continues to expand its GLM (General Language Model) series, aiming to make advanced AI more accessible. This represents a step toward more autonomous AI systems,” read an article on media outlet Data Quest.
Indian media outlet Mint suggested that “the announcement of GLM-4.5 follows a period of intensified activity among Chinese AI startups, many of which are seeking both market visibility and credibility in a sector dominated by American firms. The release of more powerful and accessible models is likely to add momentum to the competition, both within China and internationally.”
Bi Qi, chief scientist at China Telecom and a Nokia Bell Labs academician, told the Global Times that while most US AI firms adopt closed-source models, China’s open-source approach, as a challenger, has the potential to exert a certain commercial impact on the leaders. Despite US technological advantages, China holds considerable advantages in engineering and market scale.
China’s OpenAI?
Some Western media outlets try to label Zhipu as “China’s OpenAI.” How does Zhipu’s development path compare with OpenAI’s?
Zhang noted that both companies, as early pioneers in their countries, began training large-scale AI models relatively early. OpenAI, founded earlier, started training large models in 2018, initially combining industry and academia with many Stanford students involved in research. Zhipu, incubated from Tsinghua University in 2018, began exploring large models in 2019 and trained its first model in 2020, a year behind OpenAI but one of the earliest groups in China.
“In 2020, when we trained our model, many in China didn’t understand why we pursued a general model with tens or hundreds of billions of parameters. At the time, the domestic focus was on training smaller models with 100 million parameters for specific tasks,” Zhang recalled.
The second similarity is that both strive to push the boundaries of diverse capabilities. Zhang explained that “current models are not truly universal: some excel in programming, others in mathematics or reasoning, but none achieve top performance across all tasks.” He noted that GLM-4.5 aims to unify these capabilities within a single model. The next paradigm for large models will integrate diverse abilities, creating an “all-rounder” akin to a human with increasingly universal capabilities.
Zhang stated that leading global AI companies are gradually reaching a consensus: foundational models are crucial. “We share this view with OpenAI, but differ in approach. Unlike OpenAI’s closed system, we adopt an open strategy to advance science and technology, fostering industry-academia collaboration while focusing on continuously enhancing the capabilities of our strongest foundational model.”
Exploring AI’s capability frontiers
The US recently unveiled America’s “AI Action Plan,” dubbed the “AI version of Star Wars” by many people. What potential impacts could this have on the AI competition landscape between China and the US, as well as the global AI landscape?
Zhang views this as a US effort to reinforce its AI dominance, akin to the previous “Star Wars” program, potentially sparking new competition through heavy national investment.
However, he stressed that AI’s essence lies in exploring its capability ceiling, particularly the limits of AGI, where costs are unpredictable. “Without effort, we risk falling behind, but engaging in such competition demands significant resources and requires industry-academia collaboration to push toward AI’s upper limits.”
For Zhipu’s future, Zhang aims to deepen understanding of AGI’s direction, explore intelligence boundaries with global scientists, and address China’s AI development needs.
China has released 1,509 large AI models, the highest number globally and a substantial share of the 3,755 models launched worldwide to date, according to information presented at the 2025 World AI Conference, the Xinhua News Agency reported.
China has established a comprehensive AI industrial system covering foundational infrastructure and sector-specific applications. According to data from the China Academy of Information and Communications Technology (CAICT), the country is home to over 5,100 AI companies, accounting for roughly 15 percent of the global total. Additionally, China boasts 71 AI unicorns, representing about 26 percent of the world’s 271, according to Xinhua.
View original content: https://www.prnewswire.com/news-releases/global-times-global-media-spotlight-chinas-ai-advances-as-new-model-shines-with-open-source-release-zhipu-ceo-shares-backstage-stories-302518410.html