Alibaba has made another major move in the AI race by launching a new series of large language models called Qwen3, designed to compete directly with OpenAI’s ChatGPT and Google’s Gemini. The announcement came on Monday through a detailed post on X, where the Chinese tech giant outlined the capabilities of its latest AI models.
“We are excited to announce the release of Qwen3, the latest addition to the Qwen family of large language models,” Alibaba stated in an official blog post. The flagship model, Qwen3-235B-A22B, reportedly delivers impressive results in areas such as mathematics, coding, and general reasoning. According to Alibaba, its performance rivals or even surpasses top-tier models like DeepSeek-R1, o1, o3-mini, Grok-3, and Gemini-2.5-Pro.
One standout feature of Qwen3 is its multilingual support. The models are equipped to understand and generate content in 119 languages, including Indian languages like Hindi, Gujarati, Marathi, Punjabi, Bengali, Sindhi, and even region-specific dialects such as Chhattisgarhi, Maithili, and Awadhi.
The Qwen3 family comprises eight models, with sizes ranging from 0.6 billion to 235 billion parameters. This includes both dense models and Mixture of Experts (MoE) architectures, providing options that cater to different performance needs and computational constraints.
Alibaba also highlights the efficiency of its smaller models: “The small MoE model, Qwen3-30B-A3B, outcompetes QwQ-32B with 10 times of activated parameters, and even a tiny model like Qwen3-4B can rival the performance of Qwen2.5-72B-Instruct,” the company noted.
To promote open research and development, Alibaba is releasing open-weight versions of both large and compact models. These include the 235B and 30B parameter MoE models, along with six dense models under the Apache 2.0 license: Qwen3-32B, 14B, 8B, 4B, 1.7B, and 0.6B.
The models are readily available on platforms like Hugging Face, ModelScope, and Kaggle, with both pre-trained and post-trained variants. For deployment, Alibaba recommends tools like SGLang and vLLM, while local setups can use Ollama, LMStudio, MLX, llama.cpp, or KTransformers.
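For readers who want to try one of the open-weight checkpoints, a minimal sketch using the Hugging Face transformers library is shown below. The model identifier "Qwen/Qwen3-0.6B" and the generation settings are illustrative assumptions based on the naming in the release, not an official quick-start; check the Qwen page on Hugging Face for the exact model IDs.

```python
# Minimal sketch: loading an open-weight Qwen3 checkpoint via Hugging Face transformers.
# The model ID "Qwen/Qwen3-0.6B" is assumed from the release naming; verify the exact
# identifier on the Qwen organisation page before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-0.6B"  # assumed name of the smallest dense model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Explain Mixture of Experts in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```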
One of the most innovative aspects of Qwen3 is its scalable performance architecture. Users can customise the AI’s output quality based on available compute resources, striking a balance between speed, cost, and depth of understanding. This is particularly useful for coding and complex, multi-step reasoning tasks.
Qwen3 also introduces a unique hybrid thinking system, offering two operational modes. The “thinking” mode takes a more deliberate, step-by-step approach, ideal for in-depth tasks. In contrast, the “non-thinking” mode delivers faster, instant responses when needed. “This flexibility allows users to control how much ‘thinking’ the model performs based on the task at hand,” says Alibaba.
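As a rough illustration of how that switch can be exposed in practice, the sketch below toggles the mode through the chat template. The enable_thinking flag follows Qwen’s published usage notes for Qwen3, but treat the exact argument name and checkpoint name as assumptions and consult the model card for the variant you use.

```python
# Sketch: toggling Qwen3's "thinking" vs "non-thinking" behaviour at the prompt level.
# The enable_thinking argument reflects Qwen's documented chat-template usage; confirm
# the flag and the checkpoint name ("Qwen/Qwen3-4B" is assumed) against the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-4B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "What is 37 * 43?"}]

def ask(enable_thinking: bool) -> str:
    # The chat template inserts or suppresses the step-by-step reasoning block
    # depending on this flag, trading latency for deliberation.
    inputs = tokenizer.apply_chat_template(
        messages,
        add_generation_prompt=True,
        enable_thinking=enable_thinking,
        return_tensors="pt",
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=512)
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

print(ask(enable_thinking=True))   # deliberate, step-by-step answer
print(ask(enable_thinking=False))  # faster, direct answer
```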
By focusing on customisation, multilingual reach, and powerful reasoning, Qwen3 signals Alibaba’s serious intent to stand shoulder-to-shoulder with global AI leaders.