VentureBeat AI

Qualcomm shares its vision for the future of smart glasses with on-glass Gen AI

By Advanced AI Bot | June 11, 2025 | 6 Mins Read


Qualcomm has enabled what one of its executives called a strange and “most interesting” conversation, and it was with a pair of generative AI-powered smart glasses.

In a talk at Augmented World Expo, Ziad Asghar, senior vice president of XR & spatial computing at Qualcomm, said the chat wasn’t just a simple demo. It was a glimpse into how the company is turning AI glasses, long considered an accessory, into standalone, fully capable devices.

The company also unveiled its Snapdragon AR1+ Gen 1 processor, which powered the demo and is 26% smaller than the previous generation.

“On Tuesday, as I stood on stage at AWE USA, the world’s largest XR conference, I chatted with an AI assistant through a pair of RayNeo X3 Pro smart glasses powered by Snapdragon technology, with AI inferencing done on the glasses without relying on the cloud or an internet connection,” Asghar said in a statement.

Qualcomm has launched a lot of XR glasses in the past year.

He said the premise is simple: AI glasses are set to operate independently without needing to be paired with a smartphone or the cloud.

“In the near future, I will be able to leave my phone in my pocket or in the car and just wear my smart glasses during a supermarket run, as I showed off during my AWE demo,” Asghar said. “While on stage, I was at the ‘supermarket’ and asked my glasses for help with fettuccine alfredo I needed to make for my daughter’s birthday party.”

In response, the AI assistant, running Llama 1B, a small language model (SLM), understood the specific request and provided him with the information he needed through audio and text displayed in the lens of his glasses.

This demonstration was a world first: an autoregressive generative AI model running completely on a pair of smart glasses. No phone. No cloud. Just the processor powering the glasses themselves. And this industry milestone was pulled off in front of a live audience, he said.
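
The article does not detail Qualcomm’s software stack, but the flow it describes, a spoken request answered by a 1B-parameter Llama model running locally and surfaced as on-lens text and audio, can be sketched with open tooling. Below is a minimal illustration assuming a locally stored quantized checkpoint and the llama-cpp-python bindings; the model path, prompt and generation settings are hypothetical.

```python
# Minimal sketch of a fully on-device assistant turn, assuming a quantized
# 1B-parameter Llama checkpoint stored on the glasses and the llama-cpp-python
# bindings. The model path, prompt and settings are hypothetical; the article
# does not describe Qualcomm's actual software stack.
from llama_cpp import Llama

# Load the small language model from local storage; no network access is used.
llm = Llama(model_path="models/llama-1b-q4.gguf", n_ctx=2048)

def answer(query: str) -> str:
    """Run one assistant turn entirely on the local processor."""
    prompt = (
        "You are a hands-free assistant whose replies are shown on smart-glasses "
        f"lenses and spoken aloud.\nUser: {query}\nAssistant:"
    )
    out = llm(prompt, max_tokens=256, stop=["User:"])
    return out["choices"][0]["text"].strip()

# Example turn from the demo scenario; the reply would be rendered as on-lens
# text and passed to a local text-to-speech engine.
print(answer("What do I need to make fettuccine alfredo for a birthday party?"))
```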

Topping this off was the announcement of the Snapdragon AR1+ Gen 1 processor, which is 26% smaller than the previous generation and brings improvements in image quality, size and power, along with the ability to run SLMs. All four of these traits are critical for compact smart glasses.

Together, they open the door to a revolution in AI smart glasses, with thinner, lighter and more varied glass designs paired with enough power to run AI assistants right on the device.

So, while the demo was just one example of what you can do with completely on-device AI on smart glasses, the benefits that stem from the work going on at Qualcomm are long-lasting and massive.

Expand and evolve

Qualcomm’s announcements.

There isn’t one path that XR headsets and smart glasses will take, especially since Qualcomm also offers mixed reality processors such as the Snapdragon XR2 and Snapdragon XR2+, which also have significant on-device inferencing capability.

Asghar said he anticipates several different form factors, from standalone glasses powerful enough to run AI models themselves, to more lightweight frames linked to phones or to nearby small computing “pucks” that can connect to anything from a car to a tablet. What Qualcomm is doing with its portfolio is getting ready for that future.

Whether it’s cloud computing, on-device, or a hybrid path that incorporates both, the boost in on-device AI capabilities will offer a seamless and ultra-low latency user experience that’s also security-focused. That will be critical as AI-powered smart glasses find their way into sectors with mission-critical needs and users demand more personalization, more privacy features, and an end-to-end agentic experience.
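
As a rough illustration of that hybrid path, the sketch below sends a query either to an on-device SLM or to a larger cloud model depending on connectivity, sensitivity and query length. The thresholds, the endpoint and the run_local_slm / run_cloud_llm helpers are assumptions for illustration, not anything Qualcomm has described.

```python
# Hedged sketch of a hybrid on-device / cloud split. The connectivity probe,
# the word-count threshold and the two model helpers are hypothetical stand-ins
# used only to illustrate the routing idea; they are not Qualcomm's design.
import socket

def cloud_reachable(host: str = "inference.example.com", timeout: float = 0.5) -> bool:
    """Cheap connectivity probe; if offline, the query must stay on-device."""
    try:
        socket.create_connection((host, 443), timeout=timeout).close()
        return True
    except OSError:
        return False

def run_local_slm(query: str) -> str:
    # Placeholder for the on-device small language model (e.g., a 1B Llama).
    return f"[on-device answer to: {query}]"

def run_cloud_llm(query: str) -> str:
    # Placeholder for a larger cloud-hosted model.
    return f"[cloud answer to: {query}]"

def route(query: str, sensitive: bool = False) -> str:
    # Privacy-sensitive, offline or short queries stay on the glasses for low
    # latency and data control; long, complex requests fall back to the cloud.
    if sensitive or not cloud_reachable() or len(query.split()) < 40:
        return run_local_slm(query)
    return run_cloud_llm(query)

print(route("Is this pasta sauce gluten-free?", sensitive=True))
```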

“We’ve already seen significant momentum in the XR industry over the last year. In December, we collaborated with Google and Samsung to launch Android XR, an operating system designed with AI at the core of the XR experience,” he said.

This comes as the industry continues to expand, with Meta’s Ray-Ban glasses as well as more ambitious hardware such as Meta Orion, which Meta bills as its first true augmented reality glasses with their own digital overlay.

In addition, Asghar said the industry has seen glasses from Rokid, RayNeo, XREAL and more. In March, BleeqUp launched a pair of AI-powered sports glasses.

Imagine what these companies will be able to accomplish with smaller, more powerful platforms like Snapdragon AR1+ Gen 1, enabling sleeker form factors that don’t compromise on the ability to run AI models, Asghar said.

Smarter, more aware

Qualcomm is blending XR and AI.

While getting smart glasses down to a reasonable size and fit is critical, another advance the Snapdragon AR1+ Gen 1 brings is camera capabilities typically found in premium smartphones, which is equally important to where these devices are evolving.

That ability to see the world that you see – with every minute detail – will open new avenues of multimodal inputs. That capability is critical for AI to not only better understand what you see, but also connect the dots in a way that lets it proactively offer suggestions or additional context on an object or location.
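
To make that multimodal idea concrete, here is a small sketch that grounds a question in a camera frame using a publicly available visual-question-answering pipeline from Hugging Face transformers; the model choice and the frame path are assumptions, not Qualcomm’s pipeline.

```python
# Illustrative sketch only: the article names no specific vision model, so this
# uses a publicly available visual-question-answering pipeline from Hugging Face
# transformers as a stand-in for the glasses' camera-plus-assistant loop. The
# captured-frame path is hypothetical.
from transformers import pipeline
from PIL import Image

vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")

# A frame captured by the glasses' camera (hypothetical file).
frame = Image.open("camera_frame.jpg")

# Ground the wearer's question in what the camera currently sees.
result = vqa(image=frame, question="What ingredient am I holding?")
print(result[0]["answer"], result[0]["score"])
```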

And while smart glasses will be able to run SLMs on their own, that doesn’t mean they won’t work in tandem with a constellation of devices around you, whether it’s your smartphone or PC. In fact, Asghar said he sees smart watches and new devices such as smart rings or other wearable sensors enabling new modalities of input as they work in concert with your glasses.

Qualcomm Technologies is preparing for a multifaceted future with a wide range of device combinations by creating a modular architecture that allows its partners to tap into the spatial computing industry and deliver a superior experience to consumers, he said.

That’s why Asghar sees the conversation with his smart glasses’ AI assistant as a pivotal moment: it marked the beginning of something huge. The work Qualcomm is doing, he said, is just starting to unlock the game-changing potential of a deeper and more personalized agentic experience.

“A world’s first on-glass Gen AI demonstration: Qualcomm’s vision for the future of smart glasses,” said Qualcomm. “Our live demonstration of a generative AI assistant running completely on smart glasses – without the aid of a phone or the cloud — and the reveal of the new Snapdragon AR1+ platform spark new possibilities for augmented reality.”


