Advanced AI News
Voice/Audio Generation

Sounds Daily – trialling generative AI & synthetic voices to deliver personalised audio streams

By Advanced AI Bot | April 3, 2025 | 7 Mins Read


Published: 4 September 2024

The front door closes and you walk towards your vehicle. Opening the door and settling in for another journey, you place your phone in its cradle. You think of today’s to-do list, the groceries, the journey to see family or friends. You tap Sounds Daily and are greeted by a friendly, but not quite human, voice welcoming you to “The best of BBC Sounds – made just for you”.

The voice introduces a few programmes – some old favourites and some new, unfamiliar shows. You trust Sounds Daily will choose programmes you love, in an order you’ll enjoy. If the selection is not quite right, you know you can easily skip, or refresh the whole stream with a completely new selection of programmes.

It is all done for you with no need to swipe or search for what to listen to first thing in the morning.

Over the last 9 months, BBC Research & Development’s Sound Lab team have been focussing on the idea of a personalised curated stream of audio content for in-car journeys like this.

Sound Lab is one of our Innovation Labs, bridging the gap between BBC R&D and BBC Sounds. The innovation agenda for Sound Lab puts flexible media capabilities front and centre, seeking opportunities against BBC Sounds’ current goals and working together on tooling and audience experiences.

Work on in-car experiences is not new to the BBC; there has been significant research into audience behaviour in the car, from the 1990s, when R&D was developing DAB (Digital Audio Broadcasting), to R&D’s prototyping of voice UI in 2016, and more recently R&D’s exploration of flexible content and the data points that could create highly personalised experiences while driving. These could include highlighting issues with the car (low fuel) or flagging imminent appointments from your calendar. This work used a new method of participatory research, including people panels, which gave us insight into audience actions and interactions in the car. We used this work, plus the BBC’s audience research, as a platform to think about how we might bring these experiences and insights closer to what current technology and BBC systems can deliver as a near-term solution.

Sounds Daily aims to tackle R&D’s ambitious goal of giving flexible, segmented and personalised content to users on their journey. Sound Lab was the perfect place to look at these opportunities, since BBC Sounds is the place for in-car entertainment from the BBC.

[Image: car gearstick and mobile phone]

So what is Sounds Daily? It’s a personalised content stream that reorganises short- and long-form content for each listener at scale, based on their listening habits, while in the car and on the move.

It uses generative AI to query metadata and create scripts that introduce content and signpost what is coming up.
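The post doesn’t describe the underlying data model, but the reorganising step can be pictured as ranking a pool of programme metadata against a listener’s habits and filling the length of a typical journey. The Python sketch below is purely illustrative: the Programme and ListenerProfile classes and the build_stream function are invented for this example and are not BBC Sounds APIs.

```python
# A minimal, hypothetical sketch of the "personalised stream" idea described
# above. Programme, ListenerProfile and build_stream are invented for
# illustration and do not reflect actual BBC Sounds systems.
from dataclasses import dataclass, field

@dataclass
class Programme:
    title: str
    genre: str            # e.g. "news", "comedy", "sport"
    duration_mins: int

@dataclass
class ListenerProfile:
    # share of recent listening time per genre, e.g. {"news": 0.5, "comedy": 0.3}
    genre_affinity: dict = field(default_factory=dict)

def build_stream(pool: list[Programme], profile: ListenerProfile,
                 journey_mins: int = 16) -> list[Programme]:
    """Rank candidate programmes by how well they match the listener's habits,
    then fill roughly one average UK car journey (~16 minutes)."""
    ranked = sorted(pool,
                    key=lambda p: profile.genre_affinity.get(p.genre, 0.0),
                    reverse=True)
    stream, used = [], 0
    for prog in ranked:
        if used + prog.duration_mins <= journey_mins:
            stream.append(prog)
            used += prog.duration_mins
    return stream
```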

We know the morning commute is still a key part of daily car use and that average journeys in the UK last 16 minutes. That’s not much time for a personalised experiment. We started with the morning commute because it is traditionally, and consistently, the peak listening part of the day. It’s when most people head to work, drop the kids off at school or generally start their day on the move. During this time, we wanted to understand whether we were meeting the needs of our audience in the era of streaming and the changing world of connected cars, where there are considerably more options to choose from than the built-in DAB radio. Now there are many more screens and apps vying for our attention, and the prominence of your brand or app within the car entertainment system will inevitably dictate your success. Our ambition for the project was to make a distraction-free, one-click, personalised listening experience that understands what your listening habits are and serves you the right content at the right moment, similar to turning on your favourite radio station. This required a flexible media approach, rearranging content depending on what you want at the time.

As a way of connecting the content, we looked at the use of generative AI and synthetic media. Presenting thousands, if not millions, of pieces of content together, personalised for every individual user of the stream, at scale, is not possible for a human. This was an exercise to see how audiences reacted to and interacted with synthetic media and with aggregated, summarised scripts that join content seamlessly, instead of, for example, a podcast clip and a news bulletin jarring together. Our approach used GPT-4, with guardrails around BBC metadata and other IP, to generate the scripts and segues introducing the personalised stream. More about this in our upcoming blog post focussed on the technical parameters.
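The actual prompts, guardrails and model configuration are not published in this post, but the “query metadata and generate a segue” pattern might look roughly like the sketch below. It assumes the standard OpenAI chat completions client and invented metadata fields; instructing the model to use only the supplied programme details is one simple form of guardrail against invented facts, not a description of the BBC’s actual pipeline.

```python
# Hypothetical sketch of generating a segue script from programme metadata.
# The real Sounds Daily prompts and guardrails are not public; this only
# illustrates the general pattern described in the post.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_segue(previous: dict, upcoming: dict) -> str:
    """Turn metadata for the outgoing and incoming programmes into a short,
    friendly hand-over script."""
    metadata = (
        f"Just finished: {previous['title']} ({previous['genre']}).\n"
        f"Up next: {upcoming['title']} ({upcoming['genre']}), "
        f"{upcoming['duration_mins']} minutes."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a friendly continuity announcer for a "
                        "personalised audio stream. Write a two-sentence "
                        "segue. Use only the programme details provided; "
                        "do not invent facts."},
            {"role": "user", "content": metadata},
        ],
    )
    return response.choices[0].message.content
```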

[Image: a car interior showing a dashboard with the entertainment system centred]

For this experiment, we focussed on a person driving on their own. We all know the concessions we make to our listening habits when travelling in a car with friends and family, so we concentrated on individual use.

We built this experiment in the Sounds Sandbox: a mirrored copy of BBC Sounds that is set apart from the live product. This allows us to experiment freely without interfering with the live service that’s used daily by millions of people. It also means audiences see the experiments in the familiar surroundings of BBC Sounds, making them easier to navigate.

The aim was to understand whether audiences want a personalised stream that plays out what they want at the time they want it. While testing this, we also thought about how to give the user the best possible stream that matches their tastes in that moment, the type of content (specific topic as well as broad genre: cricket as well as sport) or the type of journey. An example might be that on Mondays I don’t want to start my day with the news, but on Tuesdays I do.
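One way to picture the “no news on Mondays, but news on Tuesdays” example is a simple per-listener, per-day preference table consulted when the stream is assembled. The structure below is a hypothetical illustration; the keys, values and fallback behaviour are invented rather than taken from the project.

```python
# Hypothetical per-day listening preferences, illustrating the
# "no news on Mondays, but news on Tuesdays" example above.
from datetime import date

day_preferences = {
    "Monday":  {"exclude_genres": {"news"}, "lead_with": "comedy"},
    "Tuesday": {"exclude_genres": set(),    "lead_with": "news"},
}

def preferences_for(today: date) -> dict:
    """Look up today's preferences, falling back to 'no restrictions'
    on days the listener has not configured."""
    weekday = today.strftime("%A")
    return day_preferences.get(weekday,
                               {"exclude_genres": set(), "lead_with": None})
```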

Before the trial, we asked participants to complete a survey to give us more information about what topics they liked to listen to. We also had access to six months of their listening data from BBC Sounds to understand their habits. This information helped us form a baseline for each person, against which to test the stream every time they used the experience. We integrated tools from teams across the BBC, such as R&D’s flexible media tool StoryFormer and BBC Sounds’ universal recommendations engine. This was more efficient, and it also means Sounds Daily takes advantage of already established BBC systems. Reusing existing systems is often disregarded in experimentation of this nature, as it can slow an experiment down, but for Sound Lab we want to bring experiments as close to normal workflows as possible so that the route to adoption is easier.
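As a rough illustration of how the survey answers and six months of listening history could be blended into a per-person baseline, the sketch below normalises observed genre counts and nudges declared interests upward. The weighting is arbitrary and invented for this example; the output could feed a genre-affinity map like the one sketched earlier.

```python
# Hypothetical sketch of blending an onboarding survey with listening history
# into a baseline genre-affinity profile. The 0.2 boost is an arbitrary choice
# for illustration only.
from collections import Counter

def baseline_profile(survey_topics: list[str],
                     history_genres: list[str]) -> dict[str, float]:
    counts = Counter(history_genres)
    total = sum(counts.values()) or 1
    affinity = {genre: n / total for genre, n in counts.items()}
    for topic in survey_topics:            # nudge declared interests upward
        affinity[topic] = affinity.get(topic, 0.0) + 0.2
    norm = sum(affinity.values()) or 1
    return {genre: score / norm for genre, score in affinity.items()}
```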

The project leaned on a multi-skilled team, drawing on editorial staff, producers, researchers, developers and UX designers as the work required. We were able to call on the knowledge and expertise of those working directly on BBC Sounds for advice and problem-solving when needed, while also gaining invaluable insight into audience interactions and editorial workflows.

Sounds Daily was trialled in-car with 80 participants over three weeks on the morning commute earlier this year. Learn more about the experience, what we learnt about editorial workflows and tooling, and, not least, the insights from our trial participants in the forthcoming parts of this blog series.



Source link
