Alibaba Group Holding on Friday open-sourced its latest artificial intelligence model, built on the new Qwen3-Next architecture, which the company says runs up to 10 times faster in certain tasks while costing only a 10th as much to train as its predecessor.
The company’s Qwen team said it adopted an array of architectural innovations aimed at maximising performance while minimising computational costs, according to a note published on the developer platform GitHub.
The model is developed by Alibaba Cloud, the AI and cloud computing services unit of Alibaba, owner of the South China Morning Post.
Compared with its predecessor, Qwen3-32B, released in April as part of the Qwen3 family, the new model – Qwen3-Next-80B-A3B, with 80 billion parameters – cost about a 10th as much to train and ran 10 times faster in certain tasks, the team said in a separate note published on Hugging Face, where it uploaded the new model.
It also matched the performance of the company’s flagship model Qwen3-235B-A22B, Alibaba Cloud said in a statement, adding that the new models are optimised for efficient deployment and operation on consumer-grade hardware.
The new model reflected how Alibaba Cloud and other mainland AI firms are continuing to narrow the gap with their US peers through the open-source approach, which makes the source code of AI models available for third-party developers to use, modify and distribute.
It also marked the latest example of how Hangzhou-based Alibaba Cloud has built Qwen into the world’s largest open-source AI ecosystem for developers.
The team also launched a reasoning model under the Qwen3-Next architecture, Qwen3-Next-80B-A3B-Thinking, which it said outperformed the company’s own Qwen3-32B-Thinking and Google’s Gemini-2.5-Flash-Thinking across multiple areas, citing third-party benchmark tests.

In June, the Qwen team launched open-source Qwen3 models optimised for Apple’s MLX framework for machine learning. This followed the Post’s report in February that Apple had struck a deal to use Alibaba’s Qwen models for Apple Intelligence on the mainland, citing sources familiar with the matter. Internationally, Apple Intelligence uses OpenAI’s GPT models.
For Qwen3-Next, the developer team said the new model’s efficiency boost came from a combination of techniques. These included “hybrid attention”, which makes it easier to process long text inputs, and a “high-sparsity mixture-of-experts” (MoE) architecture that balances an AI system’s performance against its computational cost.
MoE architecture divides a model into separate sub-networks, or “experts”, that specialise in a subset of the input data to jointly perform a task.
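The routing idea behind a sparse MoE layer can be sketched in a few lines. This is a generic, toy illustration of top-k expert routing, not Qwen’s actual implementation; the layer sizes, expert count and top-k value below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class SparseMoELayer:
    """Toy mixture-of-experts layer: a router picks the top-k experts
    per input, so only a small fraction of parameters is active."""

    def __init__(self, d_model=16, n_experts=8, top_k=2):
        self.top_k = top_k
        # Each "expert" is just a linear map in this sketch.
        self.experts = [
            rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
            for _ in range(n_experts)
        ]
        self.router = rng.standard_normal((d_model, n_experts)) / np.sqrt(d_model)

    def forward(self, x):
        scores = softmax(x @ self.router)           # routing probabilities
        chosen = np.argsort(scores)[-self.top_k:]   # indices of the top-k experts
        # Combine only the chosen experts, weighted by renormalised scores.
        weights = scores[chosen] / scores[chosen].sum()
        return sum(w * (x @ self.experts[i]) for w, i in zip(weights, chosen))

layer = SparseMoELayer()
x = rng.standard_normal(16)
y = layer.forward(x)
print(y.shape)  # same shape out, but only 2 of 8 experts were evaluated
```

A “high-sparsity” MoE in this sense simply means the ratio of active to total experts per token is kept very low, which is how an 80-billion-parameter model can run with only a few billion parameters active at a time.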
Other changes included a “multi-token prediction” strategy and improvements on model stability during the training process, according to the Qwen team’s note.
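In general terms, multi-token prediction means training the model to predict several future tokens from each position rather than only the next one. A schematic sketch, assuming one output head per future offset (illustrative only, not Qwen’s implementation; all sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, vocab_size, n_future = 8, 50, 3

# One output head per future offset: head k is trained to predict the
# token at position t + k + 1 from the hidden state at position t.
heads = [rng.standard_normal((d_model, vocab_size)) for _ in range(n_future)]

def predict_future_tokens(hidden_state):
    """Return the most likely token id for each of the next n_future positions."""
    return [int(np.argmax(hidden_state @ head)) for head in heads]

h = rng.standard_normal(d_model)
print(predict_future_tokens(h))  # three token ids, one per future position
```

Predicting several tokens per step gives the model a denser training signal and can also speed up inference, since draft tokens for later positions can be proposed and then verified in one pass.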
The latest Qwen model followed last Friday’s release of Alibaba Cloud’s biggest AI model to date, the Qwen-3-Max-Preview, with more than 1 trillion parameters. While a higher number of parameters generally means stronger capabilities, it also means more computational power is needed to train and run the model.
Qwen-3-Max-Preview debuted in sixth place in the latest “text arena” ranking by LMArena, an AI model evaluation platform started by researchers at the University of California, Berkeley.