Alibaba Group Holding and Zhipu AI have launched new open-source models as China’s rivalry with the US in artificial intelligence heats up.
On Tuesday, Alibaba released Wan2.2, a model family it claimed comprised the industry's "first open-source large video generation models incorporating the Mixture-of-Experts (MoE) architecture". Alibaba owns the South China Morning Post.
MoE is a machine-learning approach that divides an AI model into separate sub-networks, or experts – each focused on a subset of the input data – to jointly perform a task. It enables models to be pre-trained with far less computing power.
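The idea can be illustrated with a minimal sketch of a sparsely gated MoE layer (purely illustrative, not Wan2.2's actual architecture; the dimensions, router, and expert count here are arbitrary assumptions): a router scores each input and only the top-k experts run, so compute per input stays roughly constant even as the total parameter count grows.

```python
import numpy as np

# Illustrative Mixture-of-Experts forward pass (not Wan2.2's real design).
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" is a small feed-forward weight matrix.
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_forward(x):
    """Route a single input vector x through only the top-k experts."""
    logits = x @ router                    # one routing score per expert
    top = np.argsort(logits)[-top_k:]      # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the chosen experts only
    # Only k of the n experts are evaluated: this sparsity is the compute saving.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.standard_normal(d_model))
print(y.shape)
```

Because only two of the four experts are evaluated per input, the layer's training and inference cost scales with k rather than with the total number of experts, which is why MoE models can be pre-trained with less computing power than a dense model of the same parameter count.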
The Wan2.2 series comprises a text-to-video model, Wan2.2-T2V-A14B; an image-to-video model, Wan2.2-I2V-A14B; and Wan2.2-TI2V-5B, a hybrid model that supports both text-to-video and image-to-video generation.

Separately, Zhipu, one of China’s four “AI tigers”, launched a new generation of its GLM series on Monday. The new models, which include GLM-4.5 with 355 billion parameters and a more streamlined GLM-4.5-Air with 106 billion parameters, were built on a fully self-developed architecture, according to the company.
GLM-4.5 was China's "most advanced open-source MoE model", Zhipu said on Monday, ranking third globally and first among both domestic and open-source models by average score across "12 representative benchmarks".