Liquid AI, the startup formed by former Massachusetts Institute of Technology (MIT) researchers to develop novel AI model architectures beyond the widely used “Transformers,” today announced the release of LEAP, short for “Liquid Edge AI Platform,” a cross-platform software development kit (SDK) designed to make it easier for developers to integrate small language models (SLMs) directly into mobile applications.
Alongside LEAP, the company also introduced Apollo, a companion iOS app for testing these models locally, furthering Liquid AI’s mission to enable privacy-preserving, efficient AI on consumer hardware.
The LEAP SDK arrives at a time when many developers are seeking alternatives to cloud-only AI services due to concerns over latency, cost, privacy, and offline availability.
LEAP addresses those needs head-on with a local-first approach that allows small models to run directly on-device, reducing dependence on cloud infrastructure.
Built for mobile devs with no prior ML experience required
LEAP is designed for developers who want to build with AI but may not have deep expertise in machine learning.
According to Liquid AI, the SDK can be added to an iOS or Android project with just a few lines of code, and calling a local model is meant to feel as familiar as interacting with a traditional cloud API.
“Our research shows developers are moving beyond cloud-only AI and looking for trusted partners to help them build on-device,” said Ramin Hasani, co-founder and CEO of Liquid AI, in a blog post announcing the news today on Liquid’s website. “LEAP is our answer—a flexible, end-to-end deployment platform built from the ground up to make powerful, efficient, and private edge AI truly accessible.”
Once integrated, developers can select a model from the built-in LEAP model library, which includes compact models as small as 300MB — lightweight enough to run on modern phones with as little as 4GB of RAM.
The SDK handles local inference, memory optimization, and device compatibility, simplifying the typical edge deployment process.
LEAP is OS- and model-agnostic by design. At launch, it supports both iOS and Android, and offers compatibility with Liquid AI’s own Liquid Foundation Models (LFMs) as well as many popular open-source small models.
The goal: a unified ecosystem for edge AI
Beyond model execution, LEAP positions itself as an all-in-one platform for discovering, adapting, testing, and deploying SLMs for edge use.
Developers can browse a curated model catalog with various quantization and checkpoint options, allowing them to tailor performance and memory footprint to the constraints of the target device.
Liquid AI emphasizes that large models tend to be generalists, while small models often perform best when optimized for a narrow set of tasks. LEAP’s unified system is structured around that principle, offering tools for rapid iteration and deployment in real-world mobile environments.
The SDK also comes with a developer community hosted on Discord, where Liquid AI offers office hours, support, events, and competitions to encourage experimentation and feedback.
Apollo: like TestFlight for local AI models
To complement LEAP, Liquid AI also released Apollo, a free iOS app that lets developers and users interact with LEAP-compatible models in a local, offline setting.
Apollo began as a separate mobile app startup that let users chat with LLMs privately on device; Liquid AI acquired it earlier this year and has since rebuilt it to support the entire LEAP model library.
Apollo is designed for low-friction experimentation — developers can “vibe check” a model’s tone, latency, or output behavior right on their phones before integrating it into a production app. The app runs entirely offline, preserving user privacy and reducing reliance on cloud compute.
Whether used as a lightweight dev tool or a private AI assistant, Apollo reflects Liquid AI’s broader push to decentralize AI access and execution.
Built on the back of the LFM2 model family announced last week
The LEAP SDK release builds on Liquid AI’s July 10 announcement of LFM2, its second-generation foundation model family designed specifically for on-device workloads.
LFM2 models come in 350M, 700M, and 1.2B parameter sizes, and benchmark competitively with larger models in speed and accuracy across several evaluation tasks.
These models form the backbone of the LEAP model library and are optimized for fast inference on CPUs, GPUs, and NPUs.
Free and ready for devs to start building
LEAP is currently free to use under a developer license that includes the core SDK and model library.
Liquid AI notes that premium enterprise features will be made available under a separate commercial license in the future, though it is already taking enterprise inquiries through its website contact form.
LFM2 models are also free for academic use and commercial use by companies with under $10 million in revenue, with larger organizations required to contact the company for licensing.
Developers can get started by visiting the LEAP SDK website, downloading Apollo from the App Store, or joining the Liquid AI developer community on Discord.