Startup Deep Cogito Inc. launched today with a series of language models that it claims can outperform comparably sized open-source alternatives.
According to TechCrunch, the company was founded last June by former Google LLC staffers Drishan Arora and Dhruv Malhotra. Arora worked as a senior software engineer at the search giant. Malhotra, in turn, was a product manager at the Google DeepMind machine learning lab. The duo have raised an undisclosed amount of funding from South Park Commons.
Deep Cogito’s lineup of open-source language models is known as the Cogito v1 series. The algorithms are available in five sizes ranging from 3 billion to 70 billion parameters. They’re based on the open-source Llama and Qwen language model families, which are developed by Meta Platforms Inc. and Alibaba Group Holding Ltd., respectively.
Deep Cogito’s models use a hybrid architecture. They combine elements of standard large language models, which answer simple prompts near-instantaneously, and reasoning models. Algorithms in the latter category spend more time generating an answer, which increases their output quality. Depending on user preference, Deep Cogito’s models can either respond to prompts instantly or perform more extensive reasoning.
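The dual-mode behavior can be pictured as a single generation call with a reasoning toggle. The sketch below is purely illustrative: the function name, flag, and internal steps are assumptions, not Deep Cogito's actual interface.

```python
# Hypothetical sketch of a dual-mode generate call. The flag name
# and the "draft then refine" behavior are assumptions made for
# illustration; they are not Deep Cogito's documented API.

def generate(prompt: str, reasoning: bool = False) -> str:
    if reasoning:
        # Reasoning mode: spend extra steps refining a draft before
        # answering, trading latency for output quality.
        draft = f"draft for: {prompt}"
        return f"refined({draft})"
    # Standard mode: answer near-instantly.
    return f"quick answer to: {prompt}"

print(generate("What is 2 + 2?"))                  # fast path
print(generate("What is 2 + 2?", reasoning=True))  # slower path
```

The point of the toggle is that both behaviors live in one model, rather than requiring separate standard and reasoning models.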
The company customized its models using a new training method it calls IDA, short for iterated distillation and amplification. The technique shares some similarities with distillation, a widely used method of developing hardware-efficient language models.
With distillation, developers send a collection of prompts to a hardware-intensive LLM and save the answers. They then use those answers to train a more efficient model. This latter model thereby absorbs some of the larger LLM’s knowledge, which means it can answer the same questions using less hardware.
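The workflow can be sketched in a few lines. The sketch below is a toy illustration only: real distillation trains the student on the teacher's outputs with gradient descent, whereas this stand-in student simply memorizes the distilled pairs.

```python
# Toy sketch of knowledge distillation. All names are illustrative;
# the student here memorizes (prompt, answer) pairs rather than
# actually training on them.

def teacher_model(prompt: str) -> str:
    # Stand-in for a large, hardware-intensive LLM.
    return f"detailed answer to: {prompt}"

def distill(prompts, teacher):
    """Collect the teacher's answers to build a training set
    for a smaller student model."""
    return [(p, teacher(p)) for p in prompts]

class StudentModel:
    # Stand-in for a smaller, more hardware-efficient model.
    def __init__(self):
        self.memory = {}

    def train(self, dataset):
        for prompt, answer in dataset:
            self.memory[prompt] = answer

    def answer(self, prompt):
        return self.memory.get(prompt, "unknown")

prompts = ["What is distillation?", "Why use a smaller model?"]
student = StudentModel()
student.train(distill(prompts, teacher_model))
print(student.answer("What is distillation?"))
```

The key property is that the student never sees the teacher's weights, only its answers, which is what makes the method practical across model families.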
Deep Cogito’s IDA method likewise uses an LLM’s prompt answers for training purposes. The difference is that those answers aren’t used to improve a different, more hardware-efficient model but rather the LLM that generated the answers.
Deep Cogito researchers detailed in a blog post today that the IDA workflow involves two steps.
First, an LLM generates an answer to a prompt using methods “similar” to the ones that reasoning models rely on to process data. Those methods increase the amount of time the LLM requires to produce output. Once the prompt response is ready, the LLM distills “the higher intelligence back to the model’s parameters to internalize the amplified capability,” the researchers explained.
“By repeating these two steps, each cycle builds upon the progress of the previous iteration,” they elaborated in the blog post. “This iterative framework creates a positive feedback loop.”
In an internal test, Deep Cogito compared its most advanced model with Meta’s Llama 3.3. Both algorithms feature 70 billion parameters. Deep Cogito says that its model outperformed Llama 3.3 across all seven of the benchmarks that were used in the evaluation.
The startup claims that its smaller models likewise outperform comparably sized open-source alternatives. The algorithms feature 3 billion, 8 billion, 14 billion and 32 billion parameters, respectively. Deep Cogito plans to release new models over the next few weeks that will feature 109 billion to 671 billion parameters.