Advanced AI News
AI Tools & Product Releases

Google demos its smartglasses and makes us hanker for the future

By Advanced AI Bot · April 18, 2025 · 5 Mins Read


Table of Contents

Headset and glasses

When can we buy it?

Will it arrive too late? 

At a recent TED Talk, Google's exciting XR smartglasses were demonstrated to the public for the very first time. While we've seen the smartglasses before, it has always been in highly polished videos showcasing Project Astra, where we never get a true feel for the features and functionality in the real world. All that has now changed, and our first glimpse of the future is very exciting. However, "future" is very much the operative word.

The demonstration of what the smartglasses can do takes up the majority of the 16-minute presentation, which is introduced by Shahram Izadi, Google's vice president of augmented and extended reality. He starts out with some background on the project, which centers on Android XR, the operating system Google is building with Samsung. It brings Google Gemini to XR hardware such as headsets, smartglasses, and "form factors we haven't even dreamed of yet."

A pair of smartglasses is used for the demonstration. The design is bold, in that the frames are polished black and "heavy," much like the Ray-Ban Meta smartglasses. They feature a camera, speaker, and microphone so the AI can see and hear what's going on around you, and through a link with your phone you'll be able to make and receive calls. Where they differ from Ray-Ban Meta is in the addition of a tiny color in-lens display.

Headset and glasses

[Image: A screenshot from Google's TED Talk on its smartglasses. Credit: TED]

What makes the Android XR smartglasses initially stand out in the demo is Gemini's ability to remember what it has "seen": it correctly recalls the title of a book the wearer glanced at, and even notes where a hotel keycard was left. This short-term memory has a wide range of uses, not just as a memory jogger, but as a way to confirm details and better organize time too.

The AI vision is also used to explain a diagram in a book and to translate text into different languages. It also directly translates spoken language in real time. The screen is brought into action when Gemini is asked to navigate to a local beauty spot, with directions shown on the lens. Gemini reacts quickly to its instructions, and everything appears to work seamlessly during the live demonstration.

[Image: The Project Moohan headset. Credit: Google]

Following the smartglasses, Android XR is then shown working on a full headset. The visual experience recalls that of Apple's Vision Pro headset, with multiple windows shown in front of the wearer and pinch gestures used to control what's happening. However, Gemini is the key to using the Android XR headset, with the demonstration showing the AI's ability to describe and explain what's being seen or shown in a highly conversational manner.

When can we buy it?

[Image: The Ray-Ban Meta (top) and Solos AirGo 3 smart glasses. Credit: Andy Boxall / Digital Trends]

Izadi closed the presentation by saying, "We're entering an exciting new phase of the computing revolution. Headsets and glasses are just the beginning. All this points to a single vision of the future, a world where helpful AI will converge with lightweight XR. XR devices will become increasingly more wearable, giving us instant access to information. While AI is going to become more contextually aware, more conversational, more personalized, working with us on our terms and in our language. We're no longer augmenting our reality, but rather augmenting our intelligence."

It's tantalizing stuff, and for anyone who saw the potential in Google Glass and has already been enjoying Ray-Ban Meta, the smartglasses in particular certainly appear to be the desirable next step in the evolution of everyday smart eyewear. However, the emphasis should be on the future: while the glasses appeared almost ready for public release, that may not be the case at all, as Google continues its seemingly endless tease of smart eyewear.

Izadi didn't mention a release date for either XR device during the TED Talk, which isn't a good sign. So when are they likely to be real products we can buy? The smartglasses demonstrated are said to be a further collaboration between Google and Samsung (the headset is also made by Samsung) and are not expected to launch until 2026, according to a report from The Korean Economic Daily, which pushes the possible launch date beyond the previously rumored end of 2025. While that may seem a long way off, it's actually sooner than the consumer version of Meta's Orion smartglasses, which aren't expected to hit stores until late 2027.

Will it arrive too late? 

[Image: A screenshot from a Google video showing its smartglasses in action. Credit: Google]

Considering that the smartglasses shown during the TED Talk seem to bring together aspects of Glass, Ray-Ban Meta, and smartglasses such as those from Halliday, plus the Google Gemini assistant we already use on our phones and computers, the continued lengthy wait is surprising and frustrating.

Worse, the glut of AI-powered hardware, plus the many Ray-Ban Meta copies and alternatives expected between now and the end of 2026, means Google and Samsung's effort risks becoming old news, or eventually releasing to an incredibly jaded public. The Android XR headset, known as Project Moohan, is likely to launch in 2025.

Perhaps we're just being impatient, but when we see a demo featuring a product that looks so final, and so tantalizing, it's hard not to want it in our hands (or on our faces) sooner than some time next year.





