Advanced AI News
Microsoft AI

The opportunity at home – can AI drive innovation in personal assistant devices and sign language?

By Advanced AI Bot | March 31, 2025 | 4 Mins Read


Advancing tech innovation and combating the data desert related to sign language have been focus areas for the AI for Accessibility program. Toward those goals, in 2019 the team hosted a sign language workshop, soliciting applications from top researchers in the field. Abraham Glasser, a Ph.D. student in Computing and Information Sciences and a native American Sign Language (ASL) signer, supervised by Professor Matt Huenerfauth, was awarded a three-year grant. His work would focus on a pragmatic need and opportunity: driving inclusion by improving common interactions with home-based smart assistants for people who use sign language as a primary form of communication.

Since then, faculty and students in the Golisano College of Computing and Information Sciences at Rochester Institute of Technology (RIT) have conducted the work at the Center for Accessibility and Inclusion Research (CAIR). CAIR publishes research on computing accessibility and includes many Deaf and Hard of Hearing (DHH) students operating bilingually in English and American Sign Language.

To begin this research, the team investigated how DHH users would prefer to interact with their personal assistant devices, be it a smart speaker or another type of household device that responds to spoken commands. Traditionally, these devices have used voice-based interaction, and as the technology evolved, newer models now incorporate cameras and display screens. Currently, none of the devices available on the market understand commands in ASL or other sign languages, so introducing that capability is an important future tech development to address an untapped customer base and drive inclusion. Abraham explored simulated scenarios in which, through the camera on the device, the technology would watch a user sign, process the request, and display the result on the device's screen.
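
The simulated interaction described above can be sketched as a small pipeline. This is purely an illustrative sketch, not the study's actual software: the `SignedCommand` structure and `handle_signed_request` function are hypothetical, standing in for the sign recognition that a human interpreter performed in the simulations.

```python
from dataclasses import dataclass

@dataclass
class SignedCommand:
    """A recognized signed request (hypothetical structure)."""
    gloss: str    # ASL gloss transcription, e.g. "WEATHER TODAY"
    english: str  # English rendering of the request

def handle_signed_request(command: SignedCommand) -> str:
    # In the study, recognition was simulated by an unseen human
    # interpreter; this stub stands in for a future sign-recognition step.
    # The response is rendered on the device's screen rather than spoken,
    # since audio output alone would not serve DHH users.
    return f"Showing results for: {command.english}"

request = SignedCommand(gloss="WEATHER TODAY",
                        english="What is the weather today?")
print(handle_signed_request(request))
```

The key design point the sketch captures is that both the input (camera, not microphone) and the output (screen, not speaker) differ from the voice-first defaults of today's devices.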

Some prior research had focused on the phases of interacting with a personal assistant device, but little of it included DHH users. Examples of available research included studies of device activation, including the concerns of waking up a device, as well as device output modalities in the form of videos, ASL avatars, and English captions. From a research perspective, the call to action was to collect more data, the key bottleneck for sign language technologies.

To pave the way for technological advancements, it was critical to understand what DHH users would like the interaction with the devices to look like and what types of commands they would like to issue. Abraham and the team set up a Wizard-of-Oz videoconferencing study. A "wizard" ASL interpreter joined the call with a home personal assistant device in the room, without being seen on camera. The device's screen and output were visible in the call's video window, and each participant was guided by a research moderator. As the Deaf participants signed to the personal home device, they did not know that the ASL interpreter was voicing the commands in spoken English. A team of annotators then watched the recordings, identified key segments of the videos, and transcribed each command into English and ASL gloss.
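
The annotation output described above might be represented as simple records pairing each video segment with its English and ASL gloss transcriptions. The schema and sample commands below are illustrative, not the study's actual data format; the opening glosses echo wake-up signs the study reports observing (HELLO, HEY).

```python
from dataclasses import dataclass

@dataclass
class AnnotatedSegment:
    """One annotated command from a session recording (illustrative schema)."""
    video_id: str
    start_s: float   # segment start time, in seconds
    end_s: float     # segment end time, in seconds
    english: str     # English transcription of the command
    gloss: str       # ASL gloss transcription

segments = [
    AnnotatedSegment("p01_session1", 12.4, 15.1,
                     "Alexa, what's the weather?", "HELLO WEATHER WHAT"),
    AnnotatedSegment("p01_session1", 40.0, 42.6,
                     "Hey, set a timer for ten minutes.", "HEY TIMER TEN MINUTE"),
]

# A simple corpus query: which sign opens each command
# (i.e., candidate wake-up signs)?
wake_signs = [seg.gloss.split()[0] for seg in segments]
print(wake_signs)
```

Structuring the transcriptions this way makes it straightforward to ask questions of the corpus, such as how participants chose to activate the device, which is exactly the kind of finding the study surfaced.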

Abraham was able to identify new ways that users would interact with the device, such as "wake-up" commands, which were not captured in previous research.

Screenshots of various "wake up" signs produced by participants during the study, conducted remotely by researchers from the Rochester Institute of Technology. Participants were interacting with a personal assistant device using American Sign Language (ASL) commands, which were translated by an unseen ASL interpreter, and they spontaneously used a variety of ASL signs to activate the device before giving each command. The signs shown include examples labeled as: (a) HELLO, (b) HEY, (c) HI, (d) CURIOUS, (e) DO-DO, and (f) A-L-E-X-A.


