Researchers Use AI To Turn Sound Recordings Into Accurate Street Images – UT Austin News

By Advanced AI Editor | November 27, 2024

AUSTIN, Texas — Using generative artificial intelligence, a team of researchers at The University of Texas at Austin has converted sounds from audio recordings into street-view images. The visual accuracy of these generated images demonstrates that machines can replicate the human connection between audio and visual perception of environments.

In a paper published in Computers, Environment and Urban Systems, the research team describes training a soundscape-to-image AI model using audio and visual data gathered from a variety of urban and rural streetscapes and then using that model to generate images from audio recordings.

“Our study found that acoustic environments contain enough visual cues to generate highly recognizable streetscape images that accurately depict different places,” said Yuhao Kang, assistant professor of geography and the environment at UT and co-author of the study. “This means we can convert the acoustic environments into vivid visual representations, effectively translating sounds into sights.”

Using YouTube video and audio from cities in North America, Asia and Europe, the team created pairs of 10-second audio clips and image stills from the various locations and used them to train an AI model that could produce high-resolution images from audio input. They then compared AI sound-to-image creations made from 100 audio clips to their respective real-world photos, using both human and computer evaluations. Computer evaluations compared the relative proportions of greenery, building and sky between source and generated images, whereas human judges were asked to match each audio sample to the correct one of three generated images.
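
To make that pipeline concrete, here is a minimal sketch (not the authors' code) of how paired training examples might be cut from a streetscape video: a 10-second audio clip plus a still frame taken from the middle of the same window. It assumes the ffmpeg command-line tool is installed; the file names and clip count are illustrative.

import subprocess
from pathlib import Path

def extract_pairs(video_path: str, out_dir: str, clip_len: int = 10, n_clips: int = 6) -> None:
    """Cut n_clips consecutive windows of clip_len seconds from a video:
    one audio-only .wav per window plus one .jpg frame from its midpoint."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i in range(n_clips):
        start = i * clip_len
        # Audio-only segment (-vn drops the video stream).
        subprocess.run(
            ["ffmpeg", "-y", "-ss", str(start), "-t", str(clip_len),
             "-i", video_path, "-vn", str(out / f"clip_{i:03d}.wav")],
            check=True)
        # A single still frame taken at the midpoint of the same window.
        subprocess.run(
            ["ffmpeg", "-y", "-ss", str(start + clip_len / 2),
             "-i", video_path, "-frames:v", "1", str(out / f"frame_{i:03d}.jpg")],
            check=True)

# Hypothetical usage: extract_pairs("street_scene.mp4", "pairs/")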

The results showed strong correlations in the proportions of sky and greenery between generated and real-world images and a slightly lesser correlation in building proportions. And human participants averaged 80% accuracy in selecting the generated images that corresponded to source audio samples.
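
As a rough illustration of the computer evaluation described above, the sketch below correlates per-image proportions of greenery, building and sky between real and generated streetscapes. The proportion arrays are hypothetical placeholders (in practice they would come from something like a semantic-segmentation model), and the numbers are made up purely for illustration.

import numpy as np

def class_proportion_correlations(real: np.ndarray, generated: np.ndarray) -> dict:
    """real, generated: arrays of shape (n_images, 3) whose columns hold the
    fraction of each image covered by [greenery, building, sky].
    Returns the Pearson correlation per class across the image set."""
    labels = ["greenery", "building", "sky"]
    return {label: float(np.corrcoef(real[:, k], generated[:, k])[0, 1])
            for k, label in enumerate(labels)}

# Made-up proportions for four real/generated image pairs:
real = np.array([[0.30, 0.40, 0.20], [0.10, 0.60, 0.15],
                 [0.50, 0.20, 0.25], [0.05, 0.70, 0.10]])
generated = np.array([[0.28, 0.45, 0.18], [0.12, 0.55, 0.20],
                      [0.47, 0.25, 0.22], [0.08, 0.65, 0.12]])
print(class_proportion_correlations(real, generated))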

“Traditionally, the ability to envision a scene from sounds is a uniquely human capability, reflecting our deep sensory connection with the environment. Our use of advanced AI techniques supported by large language models (LLMs) demonstrates that machines have the potential to approximate this human sensory experience,” Kang said. “This suggests that AI can extend beyond mere recognition of physical surroundings to potentially enrich our understanding of human subjective experiences at different places.”

In addition to approximating the proportions of sky, greenery and buildings, the generated images often maintained the architectural styles and distances between objects of their real-world image counterparts, as well as accurately reflecting whether soundscapes were recorded during sunny, cloudy or nighttime lighting conditions. The authors note that lighting information might come from variations in activity in the soundscapes. For example, traffic sounds or the chirping of nocturnal insects could reveal time of day. Such observations further the understanding of how multisensory factors contribute to our experience of a place.

“When you close your eyes and listen, the sounds around you paint pictures in your mind,” Kang said. “For instance, the distant hum of traffic becomes a bustling cityscape, while the gentle rustle of leaves ushers you into a serene forest. Each sound weaves a vivid tapestry of scenes, as if by magic, in the theater of your imagination.”

Kang’s work focuses on using geospatial AI to study the interaction of humans with their environments. In another recent paper published in Nature, he and his co-authors examined the potential of AI to capture the characteristics that give cities their unique identities.


