Advanced AI News
VentureBeat AI

Liquid AI is revolutionizing LLMs to work on edge devices like smartphones with new ‘Hyena Edge’ model

By Advanced AI Editor | April 25, 2025 | 5 Mins Read



Liquid AI, the Boston-based foundation model startup spun out of the Massachusetts Institute of Technology (MIT), is seeking to move the tech industry beyond its reliance on the Transformer architecture underpinning most popular large language models (LLMs) such as OpenAI’s GPT series and Google’s Gemini family.

Yesterday, ahead of the International Conference on Learning Representations (ICLR) 2025, the company announced “Hyena Edge,” a new convolution-based, multi-hybrid model designed for smartphones and other edge devices.

The conference, one of the premier events for machine learning research, is taking place this year in Vienna, Austria.

New convolution-based model promises faster, more memory-efficient AI at the edge

Hyena Edge is engineered to outperform strong Transformer baselines on both computational efficiency and language model quality.

In real-world tests on a Samsung Galaxy S24 Ultra smartphone, the model delivered lower latency, smaller memory footprint, and better benchmark results compared to a parameter-matched Transformer++ model.

A new architecture for a new era of edge AI

Unlike most small models designed for mobile deployment — including SmolLM2, the Phi models, and Llama 3.2 1B — Hyena Edge steps away from traditional attention-heavy designs. Instead, it strategically replaces two-thirds of grouped-query attention (GQA) operators with gated convolutions from the Hyena-Y family.
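To make the “multi-hybrid” idea concrete, here is a minimal, purely illustrative PyTorch sketch of a layer stack that keeps one attention block for every two gated-convolution blocks, mirroring the reported two-thirds replacement ratio. The module names, dimensions, and the plain multi-head attention stand-in (used here in place of true grouped-query attention and the actual Hyena-Y operator) are assumptions for illustration, not Liquid AI’s implementation.

```python
# Illustrative sketch only: a hybrid stack with attention in 1 of every 3
# positions and gated short convolutions elsewhere. Not Liquid AI's code.
import torch
import torch.nn as nn

class GatedConvBlock(nn.Module):
    """Hyena-style gated convolution: depthwise causal conv plus an elementwise gate."""
    def __init__(self, dim: int, kernel_size: int = 7):
        super().__init__()
        self.in_proj = nn.Linear(dim, 2 * dim)  # value and gate branches
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              groups=dim, padding=kernel_size - 1)  # causal depthwise conv
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (batch, seq, dim)
        v, g = self.in_proj(x).chunk(2, dim=-1)
        v = self.conv(v.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)  # trim causal padding
        return self.out_proj(v * torch.sigmoid(g))  # gated output

class AttentionBlock(nn.Module):
    """Stand-in for a grouped-query attention block (plain multi-head attention here)."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        out, _ = self.attn(x, x, x, need_weights=False)
        return out

def build_hybrid_stack(dim: int = 512, n_layers: int = 12) -> nn.ModuleList:
    # Attention at positions 0, 3, 6, ... (one third); gated convolutions elsewhere.
    return nn.ModuleList([
        AttentionBlock(dim) if i % 3 == 0 else GatedConvBlock(dim)
        for i in range(n_layers)
    ])

if __name__ == "__main__":
    layers = build_hybrid_stack()
    x = torch.randn(1, 16, 512)
    for layer in layers:
        x = x + layer(x)  # residual connection
    print(x.shape)  # torch.Size([1, 16, 512])
```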

The new architecture is the result of Liquid AI’s Synthesis of Tailored Architectures (STAR) framework, announced in December 2024, which uses evolutionary algorithms to automatically design model backbones.

STAR explores a wide range of operator compositions, rooted in the mathematical theory of linear input-varying systems, to optimize for multiple hardware-specific objectives like latency, memory usage, and quality.
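The mechanics of STAR are not spelled out here, but the general shape of an evolutionary search over operator compositions can be sketched as follows. Everything in this toy example, from the operator vocabulary to the scoring function and its weights, is an invented stand-in, assuming only that candidate backbones are sequences of operator choices scored against multiple hardware-oriented objectives.

```python
# Toy evolutionary architecture search in the spirit of (but not equivalent to)
# STAR. Scoring is mocked: convolutions are pretended to be cheaper on-device,
# while retaining some attention avoids a quality penalty. Lower score is better.
import random

OPERATORS = ["gqa", "hyena_y", "swiglu"]

def random_genome(n_layers=12):
    return [random.choice(OPERATORS) for _ in range(n_layers)]

def mutate(genome, rate=0.2):
    return [random.choice(OPERATORS) if random.random() < rate else op for op in genome]

def score(genome):
    latency = sum(1.0 if op == "gqa" else 0.6 for op in genome)   # mock latency objective
    quality_penalty = 0.0 if "gqa" in genome else 2.0             # mock quality objective
    return latency + quality_penalty

def evolve(generations=20, population=16):
    pool = [random_genome() for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=score)                       # rank candidates
        survivors = pool[: population // 2]        # keep the better half
        pool = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pool, key=score)

if __name__ == "__main__":
    best = evolve()
    print(best, round(score(best), 2))
```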

Benchmarked directly on consumer hardware

To validate Hyena Edge’s real-world readiness, Liquid AI ran tests directly on the Samsung Galaxy S24 Ultra smartphone.

Results show that Hyena Edge achieved up to 30% faster prefill and decode latencies compared to its Transformer++ counterpart, with speed advantages increasing at longer sequence lengths.

Prefill latencies at short sequence lengths also outpaced the Transformer baseline — a critical performance metric for responsive on-device applications.

In terms of memory, Hyena Edge consistently used less RAM during inference across all tested sequence lengths, positioning it as a strong candidate for environments with tight resource constraints.
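Liquid AI’s on-device numbers come from its own benchmarking setup, which is not public, but the distinction between prefill and decode latency can be illustrated with any causal LM. The sketch below uses a placeholder Hugging Face model (“gpt2”), since Hyena Edge has not yet been released; the point is the measurement split, not the absolute numbers.

```python
# Rough sketch of separating prefill vs. decode latency for a causal LM.
# Placeholder model; not the harness used for the Galaxy S24 Ultra results.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
prompt_ids = tok("Edge inference benchmark prompt", return_tensors="pt").input_ids

with torch.no_grad():
    t0 = time.perf_counter()
    out = model(prompt_ids, use_cache=True)          # prefill: process the whole prompt once
    prefill_s = time.perf_counter() - t0

    past = out.past_key_values
    next_id = out.logits[:, -1:].argmax(-1)
    t0 = time.perf_counter()
    for _ in range(32):                              # decode: one token at a time with the KV cache
        out = model(next_id, past_key_values=past, use_cache=True)
        past = out.past_key_values
        next_id = out.logits[:, -1:].argmax(-1)
    decode_s_per_token = (time.perf_counter() - t0) / 32

print(f"prefill: {prefill_s*1000:.1f} ms, decode: {decode_s_per_token*1000:.1f} ms/token")
```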

Outperforming Transformers on language benchmarks

Hyena Edge was trained on 100 billion tokens and evaluated across standard benchmarks for small language models, including Wikitext, Lambada, PiQA, HellaSwag, Winogrande, ARC-easy, and ARC-challenge.

On every benchmark, Hyena Edge either matched or exceeded the performance of the GQA-Transformer++ model, with noticeable improvements in perplexity scores on Wikitext and Lambada, and higher accuracy rates on PiQA, HellaSwag, and Winogrande.

These results suggest that the model’s efficiency gains do not come at the cost of predictive quality — a common tradeoff for many edge-optimized architectures.
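For reference, perplexity on corpora such as Wikitext and Lambada is conventionally computed as the exponential of the model’s mean token-level cross-entropy. A minimal sketch, again using a placeholder model rather than Hyena Edge:

```python
# Standard perplexity computation for a causal LM on a piece of text.
# The model name is a placeholder; Hyena Edge is not yet publicly available.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model_name: str, text: str) -> float:
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=input_ids returns the mean cross-entropy over predicted tokens.
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

if __name__ == "__main__":
    print(perplexity("gpt2", "The quick brown fox jumps over the lazy dog."))
```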

Hyena Edge Evolution: A look at performance and operator trends

For those seeking a deeper dive into Hyena Edge’s development process, a recent video walkthrough provides a compelling visual summary of the model’s evolution.

The video highlights how key performance metrics — including prefill latency, decode latency, and memory consumption — improved over successive generations of architecture refinement.

It also offers a rare behind-the-scenes look at how the internal composition of Hyena Edge shifted during development. Viewers can see dynamic changes in the distribution of operator types, such as Self-Attention (SA) mechanisms, various Hyena variants, and SwiGLU layers.

These shifts offer insight into the architectural design principles that helped the model reach its current level of efficiency and accuracy.

By visualizing the trade-offs and operator dynamics over time, the video provides valuable context for understanding the architectural breakthroughs underlying Hyena Edge’s performance.

Open-source plans and a broader vision

Liquid AI said it plans to open-source a series of Liquid foundation models, including Hyena Edge, over the coming months. The company’s goal is to build capable and efficient general-purpose AI systems that can scale from cloud datacenters down to personal edge devices.

The debut of Hyena Edge also highlights the growing potential for alternative architectures to challenge Transformers in practical settings. With mobile devices increasingly expected to run sophisticated AI workloads natively, models like Hyena Edge could set a new baseline for what edge-optimized AI can achieve.

Hyena Edge’s success — both in raw performance metrics and in showcasing automated architecture design — positions Liquid AI as one of the emerging players to watch in the evolving AI model landscape.
