The latest addition to the small model wave for enterprises comes from AI21 Labs, which is betting that bringing models to devices will free up traffic in data centers.
AI21’s Jamba Reasoning 3B is a “tiny” open-source model that can run extended reasoning, generate code and respond based on ground truth. It supports a context window of more than 250,000 tokens and can run inference on edge devices.
The company said Jamba Reasoning 3B works on devices such as laptops and mobile phones.
Ori Goshen, co-CEO of AI21, told VentureBeat that the company sees more enterprise use cases for small models, mainly because moving most inference to devices frees up data centers.
“What we're seeing right now in the industry is an economics issue where there are very expensive data center build-outs, and the revenue that is generated from the data centers versus the depreciation rate of all their chips shows the math doesn't add up,” Goshen said.
He added that in the future “the industry by and large would be hybrid in the sense that some of the computation will be on devices locally and other inference will move to GPUs.”
Tested on a MacBook
Jamba Reasoning 3B combines the Mamba architecture with Transformer layers, which allows it to run a 250K-token context window on devices. AI21 said the model delivers 2x to 4x faster inference speeds, and Goshen credited the Mamba architecture for much of that speed.
The hybrid architecture also reduces Jamba Reasoning 3B’s memory requirements, which in turn lowers its compute needs.
AI21 tested the model on a standard MacBook Pro and found that it can process 35 tokens per second.
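For readers who want to gauge on-device throughput themselves, the minimal sketch below measures tokens per second for a small open model using Hugging Face transformers. The model identifier, the 256-token generation budget and the precision settings are assumptions for illustration; check the official model card for the exact ID and the transformers version required for the Jamba architecture.

```python
# Minimal sketch: measuring local generation throughput for a small open model.
# Assumption: the model is published on Hugging Face under an identifier such as
# "ai21labs/AI21-Jamba-Reasoning-3B" (verify against the model card), and the
# installed transformers version supports the Jamba architecture.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ai21labs/AI21-Jamba-Reasoning-3B"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision to fit laptop memory
    device_map="auto",          # picks Apple "mps", CUDA, or CPU (needs accelerate)
)

prompt = "Draft a three-item agenda for tomorrow's budget review meeting."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

start = time.time()
output = model.generate(**inputs, max_new_tokens=256)
elapsed = time.time() - start

new_tokens = output.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{new_tokens / elapsed:.1f} tokens/sec")
print(tokenizer.decode(output[0], skip_special_tokens=True))
```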
Goshen said the model works best for tasks involving function calling, policy-grounded generation and tool routing. He said simple requests, such as pulling up details about an upcoming meeting and asking the model to draft an agenda for it, can be handled on devices, while more complex reasoning tasks can be reserved for GPU clusters.
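The hybrid split Goshen describes can be illustrated with a simple router: lightweight requests stay on the device, heavier reasoning is forwarded to a GPU-backed endpoint. Everything in the sketch below, from the keyword heuristic to the handler names, is a hypothetical placeholder rather than an AI21 API.

```python
# Hypothetical sketch of hybrid routing: simple asks go to an on-device model,
# complex reasoning goes to a remote GPU cluster. Names and heuristics are
# illustrative placeholders, not AI21 APIs.
LIGHTWEIGHT_HINTS = ("agenda", "summarize", "schedule", "remind", "meeting")

def is_lightweight(request: str) -> bool:
    """Crude heuristic: short, everyday asks are treated as on-device work."""
    text = request.lower()
    return len(text.split()) < 40 and any(hint in text for hint in LIGHTWEIGHT_HINTS)

def run_on_device(request: str) -> str:
    # In practice this would call the local small model (see the previous sketch).
    return f"[local model] {request}"

def send_to_gpu_cluster(request: str) -> str:
    # In practice this would call a hosted inference endpoint over the network.
    return f"[remote GPU cluster] {request}"

def route(request: str) -> str:
    """Dispatch a request to the on-device model or the data-center cluster."""
    handler = run_on_device if is_lightweight(request) else send_to_gpu_cluster
    return handler(request)

print(route("Create an agenda for tomorrow's planning meeting"))
print(route("Review these contracts and flag clauses that conflict with our retention policy"))
```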
Small models in enterprise
Enterprises have been interested in using a mix of small models, some designed specifically for their industry and others that are condensed versions of larger LLMs.
In September, Meta released MobileLLM-R1, a family of reasoning models ranging from 140M to 950M parameters. These models are designed for math, coding and scientific reasoning rather than chat applications. MobileLLM-R1 can run on compute-constrained devices.
Google’s Gemma was one of the first small models to come to market, designed to run on portable devices such as laptops and mobile phones. The Gemma family has since been expanded.
Companies like FICO have also begun building their own models. FICO launched its FICO Focused Language and FICO Focused Sequence small models, which answer only finance-specific questions.
Goshen said the big difference with AI21’s model is that it is even smaller than most of these, yet it can run reasoning tasks without sacrificing speed.
Benchmark testing
In benchmark testing, Jamba Reasoning 3B demonstrated strong performance compared to other small models, including Alibaba’s Qwen3-4B, Meta’s Llama 3.2 3B and Microsoft’s Phi-4-Mini.
It outperformed those models on the IFBench test and Humanity’s Last Exam, although it came in second to Qwen3-4B on MMLU-Pro.
Goshen said another advantage of small models like Jamba Reasoning 3B is that they are highly steerable and give enterprises better privacy options, because inference requests are not sent to an outside server.
“I do believe there’s a world where you can optimize for the needs and the experience of the customer, and the models that will be kept on devices are a large part of it,” he said.