Soket AI will develop an open source 120 billion parameter foundation model optimised for the country’s linguistic diversity, targeting sectors such as defence, healthcare and education. Gnani.ai will build a 14 billion parameter voice AI foundation model delivering multilingual, real-time speech processing with advanced reasoning capabilities.
Gan.ai will create a 70 billion parameter multilingual foundation model targeting superhuman text-to-speech (TTS) model capabilities.
The IndiaAI Mission also said that 34,333 GPUs are now available on the IndiaAI Compute portal. These include GPUs from Nvidia, AMD, AWS, and Intel. The offering comprises 15,100 Nvidia H100 GPUs, 8,192 Nvidia B200 GPUs, 4,812 Nvidia H200 GPUs, and 1,973 Nvidia L40S GPUs. The other GPU variants are each available in quantities of fewer than 1,000.
All seven technically qualified bidders in the second round of the GPU tender are empanelled, it said.
The seven empanelled bidders are Cyfuture India, Ishan Infotech, Locuz Enterprise Solutions, Netmagic IT Services (NTT GDC India), Sify Digital Services, Vensysco Technologies, and Yotta Data Services.
Vaishnaw revealed that 367 datasets have been uploaded to AI Kosh, the government’s platform that provides a repository of datasets, models and use cases to enable AI innovation. It also features AI sandbox capabilities.

The IndiaAI Mission had said on May 16 that it received 506 foundation AI proposals across the three phases.
Given the overwhelming response and continued interest, the mission has decided to extend the deadline for submissions under phase 3 of the call for proposals. The earlier deadline was April 30.
“Further dates for submission of proposals, the acceptance of new applications post April 30, will be announced subsequently, as per requirements, once the examination of proposals already submitted has been completed,” IndiaAI said on its website.
As part of the first phase of approvals, Sarvam.ai was selected to initiate the development of an indigenous foundation model.
Sarvam’s multimodal, multi-scale model will have 70 billion parameters.
Sarvam cofounder Vivek Raghavan had said on April 26 that the model will be completed in six months.
Capable of reasoning, designed for voice, and fluent in Indian languages, Sarvam’s model will be ready for secure, population-scale deployment, the startup had said in a statement.
According to cofounder Pratyush Kumar, Sarvam is developing three model variants — Sarvam-Large for advanced reasoning and generation, Sarvam-Small for real-time interactive applications, and Sarvam-Edge for compact on-device tasks.
Sarvam will get access to 4,096 Nvidia H100 graphics processing units (GPUs) for six months from the IndiaAI Mission’s common compute cluster to train its model, people in the know had said.
“We are collaborating with AI4Bharat at IIT-Madras, a leader in Indian language AI research, to build these models,” Kumar had said.