As demand for large-scale AI deployment skyrockets, the lesser-known, private chip startup Positron is positioning itself as a direct challenger to market leader Nvidia by offering dedicated, energy-efficient, memory-optimized inference chips aimed at relieving the industry’s mounting cost, power, and availability bottlenecks.
“A key differentiator is our ability to run frontier AI models with better efficiency—achieving 2x to 5x performance per watt and dollar compared to Nvidia,” said Thomas Sohmers, Positron co-founder and CTO, in a recent video call interview with VentureBeat.
That’s good news for big AI model providers, obviously, but Positron’s leadership contends the chips also benefit a much broader set of enterprises, including those that use AI models within their own workflows rather than offering them as services to customers.
“We build chips that can be deployed in hundreds of existing data centers because they don’t require liquid cooling or extreme power densities,” pointed out Mitesh Agrawal, Positron’s CEO and the former chief operating officer of AI cloud inference provider Lambda, also in the same video call interview with VentureBeat.
Venture capitalists and early users seem to agree.
Positron yesterday announced an oversubscribed $51.6 million Series A funding round led by Valor Equity Partners, Atreides Management and DFJ Growth, with support from Flume Ventures, Resilience Reserve, 1517 Fund and Unless.
As for Positron’s early customer base, it includes both name-brand enterprises and companies operating in inference-heavy sectors. Confirmed deployments include Cloudflare, the major security and cloud content networking provider, which runs Positron’s Atlas hardware in its globally distributed, power-constrained data centers, and Parasail, which deploys it through its AI-native data infrastructure platform SnapServe.
Beyond these, Positron reports adoption across several key verticals where efficient inference is critical, such as networking, gaming, content moderation, content delivery networks (CDNs), and Token-as-a-Service providers.
These early users are reportedly drawn in by Atlas’s ability to deliver high throughput and lower power consumption without requiring specialized cooling or reworked infrastructure, making it an attractive drop-in option for AI workloads across enterprise environments.
Entering a challenging market where AI models are shrinking and growing more efficient
But Positron is also entering a challenging market. The Information just reported that Groq, the buzzy rival AI inference chip startup where Sohmers previously worked as Director of Technology Strategy, has cut its 2025 revenue projection from more than $2 billion to $500 million, highlighting just how volatile the AI hardware space can be.
Even well-funded firms face headwinds as they compete for data center capacity and enterprise mindshare against entrenched GPU providers like Nvidia, not to mention the elephant in the room: the rise of more efficient, smaller large language models (LLMs) and specialized small language models (SLMs) that can even run on devices as small and low-powered as smartphones.
Yet Positron’s leadership is for now embracing the trend and shrugging off the possible impacts on its growth trajectory.
“There’s always been this duality—lightweight applications on local devices and heavyweight processing in centralized infrastructure,” said Agrawal. “We believe both will keep growing.”
Sohmers agreed, stating: “We see a future where every person might have a capable model on their phone, but those will still rely on large models in data centers to generate deeper insights.”
Atlas is an inference-first AI chip
While Nvidia GPUs helped catalyze the deep learning boom by accelerating model training, Positron argues that inference — the stage where models generate output in production — is now the true bottleneck.
Its founders call it the most under-optimized part of the “AI stack,” especially for generative AI workloads that depend on fast, efficient model serving.
Positron’s solution is Atlas, its first-generation inference accelerator built specifically to handle large transformer models.
Unlike general-purpose GPUs, Atlas is optimized for the unique memory and throughput needs of modern inference tasks.
The company claims Atlas delivers 3.5x better performance per dollar and up to 66% lower power usage than Nvidia’s H100, while also achieving 93% memory bandwidth utilization—far above the typical 10–30% range seen in GPUs.
From Atlas to Titan, supporting multi-trillion parameter models
Launched just 15 months after the company’s founding, and developed with only $12.5 million in seed capital, Atlas is already shipping and in production.
The system supports up to 0.5 trillion-parameter models in a single 2kW server and is compatible with Hugging Face transformer models via an OpenAI API-compatible endpoint.
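In practice, OpenAI API compatibility means existing client code can simply be repointed at an Atlas server without rewrites. Here is a minimal sketch in Python, assuming a hypothetical endpoint URL, API key, and hosted model name (none of these are confirmed Positron values):

```python
# Minimal sketch: calling an OpenAI API-compatible inference endpoint.
# The base_url, api_key, and model name are hypothetical placeholders,
# not confirmed Positron values.
from openai import OpenAI

client = OpenAI(
    base_url="https://atlas.example.com/v1",  # hypothetical Atlas endpoint
    api_key="YOUR_API_KEY",                   # placeholder credential
)

response = client.chat.completions.create(
    model="llama-3-70b-instruct",  # any transformer model the server hosts
    messages=[{"role": "user", "content": "Summarize our Q3 incident report."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```

Because the endpoint speaks the same protocol, swapping backends is a one-line configuration change rather than a migration project, which is the “drop-in” property Positron emphasizes.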
Positron is now preparing to launch its next-generation platform, Titan, in 2026.
Built on custom-designed “Asimov” silicon, Titan will feature up to two terabytes of high-speed memory per accelerator and support models up to 16 trillion parameters.
Today’s frontier models range from hundreds of billions to a few trillion parameters, and newer models like OpenAI’s GPT-5 are presumed to be in the multi-trillions. Still larger models are currently thought to be required to reach artificial general intelligence (AGI), AI that outperforms humans at most economically valuable work, and superintelligence, AI that exceeds humans’ ability to understand and control it.
Crucially, Titan is designed to operate with standard air cooling in conventional data center environments, avoiding the high-density, liquid-cooled configurations that next-gen GPUs increasingly require.
Engineering for efficiency and compatibility
From the start, Positron designed its system to be a drop-in replacement, allowing customers to use existing model binaries without code rewrites.
“If a customer had to change their behavior or their actions in any way, shape or form, that was a barrier,” said Sohmers.
Sohmers explained that instead of building a complex compiler stack or rearchitecting software ecosystems, Positron focused narrowly on inference, designing hardware that ingests Nvidia-trained models directly.
“The CUDA moat isn’t something to fight,” said Agrawal. “It’s an ecosystem to participate in.”
This pragmatic approach helped the company ship its first product quickly, validate performance with real enterprise users, and secure significant follow-on investment. In addition, its reliance on air cooling rather than liquid cooling makes its Atlas chips the only option for some existing deployments.
“We’re focused entirely on purely air-cooled deployments… all these Nvidia Hopper- and Blackwell-based solutions going forward require liquid cooling… The only place you can put those racks is in data centers that are being newly built now in the middle of nowhere,” said Sohmers.
All told, Positron’s ability to execute quickly and capital-efficiently has helped distinguish it in a crowded AI hardware market.
Memory is what you need
Sohmers and Agrawal point to a fundamental shift in AI workloads: from compute-bound convolutional neural networks to memory-bound transformer architectures.
Whereas older models demanded high FLOPs (floating-point operations), modern transformers require massive memory capacity and bandwidth to run efficiently.
While Nvidia and others continue to focus on compute scaling, Positron is betting on memory-first design.
Sohmers noted that with transformer inference, the ratio of compute to memory operations flips to near 1:1, meaning that boosting memory utilization has a direct and dramatic impact on performance and power efficiency.
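The arithmetic behind that claim is standard roofline-style reasoning: in memory-bound decoding, every generated token requires streaming the model’s weights through memory, so achievable tokens per second are roughly effective memory bandwidth divided by model size in bytes. A back-of-envelope sketch in Python (the bandwidth, model size, and utilization figures below are illustrative assumptions, not published Positron or Nvidia specs):

```python
# Back-of-envelope: tokens/sec for memory-bound transformer decoding.
# tokens_per_sec ≈ (peak_bandwidth * utilization) / bytes_per_token,
# where bytes_per_token ~ all model weights streamed once per decoded token.
# All numbers are illustrative assumptions, not vendor specifications.

def decode_throughput(params_billion: float, bytes_per_param: float,
                      peak_bw_tbps: float, utilization: float) -> float:
    bytes_per_token = params_billion * 1e9 * bytes_per_param
    effective_bw = peak_bw_tbps * 1e12 * utilization
    return effective_bw / bytes_per_token

# A 70B-parameter model in 8-bit weights on 3 TB/s of peak memory bandwidth:
for util in (0.30, 0.93):  # typical-GPU vs. claimed Atlas-class utilization
    tps = decode_throughput(70, 1.0, 3.0, util)
    print(f"utilization {util:.0%}: ~{tps:.0f} tokens/sec per accelerator")
```

Under these assumptions, lifting bandwidth utilization from 30% to 93% roughly triples decode throughput with no additional compute, which is the crux of the memory-first argument.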
With Atlas already outperforming contemporary GPUs on key efficiency metrics, Titan aims to take this further by offering the highest memory capacity per chip in the industry.
At launch, Titan is expected to offer an order-of-magnitude increase over typical GPU memory configurations — without demanding specialized cooling or boutique networking setups.
U.S.-built chips
Positron’s production pipeline is proudly domestic. The company’s first-generation chips were fabricated in the U.S. using Intel facilities, with final server assembly and integration also based domestically.
For the Asimov chip, fabrication will shift to TSMC, though the team is aiming to keep as much of the rest of the production chain in the U.S. as possible, depending on foundry capacity.
Geopolitical resilience and supply chain stability are becoming key purchasing criteria for many customers — another reason Positron believes its U.S.-made hardware offers a compelling alternative.
What’s next?
Agrawal noted that Positron’s silicon targets not just broad compatibility but maximum utility for enterprise, cloud, and research labs alike.
While the company has not named any frontier model providers as customers yet, he confirmed that outreach and conversations are underway.
Agrawal emphasized that selling physical infrastructure based on economics and performance—not bundling it with proprietary APIs or business models—is part of what gives Positron credibility in a skeptical market.
“If you can’t convince a customer to deploy your hardware based on its economics, you’re not going to be profitable,” he said.