Reflection AI, a startup founded by two former Google DeepMind researchers, has raised $2 billion at an $8 billion valuation, a roughly fifteenfold leap from its prior $545 million valuation.
While the startup was originally focused on autonomous coding agents, it is now positioning itself both as an open source alternative to closed frontier labs like OpenAI and Anthropic and as a Western alternative to Chinese AI company DeepSeek.
The funding round saw participation from prominent investors including Nvidia, former Google CEO Eric Schmidt, Citi and Donald Trump Jr.-backed private equity firm 1789 Capital, as well as existing investors Lightspeed and Sequoia.
Founded in 2024 by former DeepMind researchers Misha Laskin and Ioannis Antonoglou, Reflection develops tools that automate software development, a fast-growing use case for AI.
The company announced after the fundraising that it has recruited a team of top talent from DeepMind and OpenAI, and built an advanced AI training stack that it promises will be open for all. It also claims that it has “identified a scalable commercial model that aligns with our open intelligence strategy.”
Reflection AI’s team currently numbers around 60 people, mostly AI researchers and engineers working across infrastructure, data training, and algorithm development, according to Laskin, who is also the company’s CEO. Reflection AI has secured a compute cluster and hopes to release a frontier language model next year that’s trained on “tens of trillions of tokens,” he told TechCrunch.
“We built something once thought possible only inside the world’s top labs: a large-scale LLM and reinforcement learning platform capable of training massive Mixture-of-Experts (MoEs) models at frontier scale,” Reflection AI wrote in a post on X. “We saw the effectiveness of our approach first-hand when we applied it to the critical domain of autonomous coding. With this milestone unlocked, we’re now bringing these methods to general agentic reasoning.”
MoE is an architecture that powers today’s frontier LLMs, and one that previously only large, closed AI labs were capable of training at scale. DeepSeek was the first company to train such models at scale and release them openly. Other Chinese models like Qwen and Kimi followed suit.
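To make the architecture concrete: in an MoE layer, a small “router” network picks a few specialist sub-networks (“experts”) per token, so only a fraction of the model’s parameters run on any given input. The sketch below is purely illustrative and assumes nothing about Reflection AI’s actual stack; the class name, dimensions, and top-2 routing scheme are all hypothetical choices, shown here in PyTorch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Minimal Mixture-of-Experts layer: a router scores all experts per
    token, keeps the top-k, and mixes their outputs. Only k of n_experts
    run per token, which is where MoE's compute savings come from."""

    def __init__(self, d_model: int, d_hidden: int, n_experts: int, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)  # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model)
        scores = self.router(x)                      # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)   # top-k experts per token
        weights = F.softmax(weights, dim=-1)         # normalize over selected
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e             # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

# Toy usage: 16 token embeddings, 8 experts, 2 active per token.
tokens = torch.randn(16, 64)
layer = TopKMoE(d_model=64, d_hidden=256, n_experts=8, k=2)
print(layer(tokens).shape)  # torch.Size([16, 64])
```

At frontier scale the same idea lets a model hold a very large total parameter count while keeping per-token compute close to that of a much smaller dense model, which is the property the quoted post is alluding to.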
“DeepSeek and Qwen and all these models are our wake-up call because if we don’t do anything about it, then effectively, the global standard of intelligence will be built by someone else,” Laskin said. “It won’t be built by America.”
Reflection AI hasn’t released its first model yet. According to Laskin, it will be largely text-based, with multimodal capabilities to come later. The company will use the funds from this latest round to secure the compute resources needed to train the new models, the first of which it aims to release early next year.