Sonos has launched a new version of its Speech Enhancement tools for the Sonos Arc Ultra, which we rate as one of the best soundbars available.
You’ll still find these tools on the Now Playing screen in the Sonos app, but instead of having just a couple of options, you’ll now have four new modes (Low, Medium, High and Max), all powered by the company’s first use of an AI sound-processing tool. They should be available today (May 12th) to all users.
These modes were developed in a year-long partnership with the Royal National Institute for the Deaf (RNID), the UK’s leading charity for people with hearing loss. I spoke to Sonos and the RNID to get the inside story on the feature’s development – but read on for more of the details first.
The update launches today on Sonos Arc Ultra soundbars, but won’t be available on any other Sonos soundbars because it requires a higher level of processing power, which the chip inside the Arc Ultra can provide, but the older soundbars can’t.
The AI element is used to analyze the sound passing through the soundbar in real time, and separate out the ‘speech’ elements from the sound so they can be made more prominent in the mix without affecting the rest of the sound too much. I’ve heard it in action during a demo at Sonos’ UK product development facility, and it’s very impressive.
If you’ve used speech enhancement tools before, you’re probably familiar with the dynamic range of the sound, and especially the bass, suddenly being massively reduced in exchange for the speech being pushed further forward.
That’s not the case with Sonos’ new mode – powerful bass, the overall soundscape, and the more immersive Dolby Atmos elements are all maintained far better. That’s for two reasons: one is that the speech is being enhanced separately from the other parts of the mix, and the other is that it’s a dynamic system that only activates when it detects that speech is likely to be drowned out by background noise.
It won’t activate if dialogue is happening against a quiet background, or if there’s no dialogue in the scene. And it’s a system that works by degrees – it applies more processing in the busiest scenes, and less when the audio is not as chaotic.
How does it sound?
On the two lowest modes, dialogue is picked out more clearly with no major harm to the rest of the soundtrack, based on my demo.
On the High mode, the background was still maintained really well, but the speech started to sound a little more processed. On Max, I could hear the background getting its wings clipped a little, and some more artificiality in the speech – but the dialogue was extremely well picked out, and this mode is only really designed for the hard of hearing.
I mentioned that the mode was developed with the RNID, which involved Sonos consulting with sound research experts at the RNID, but also getting people with different types and levels of hearing loss to test the modes at different stages of development and provide feedback.
I spoke at length to the Sonos audio and AI architects who developed the new modes, as well as the RNID, but the key takeaway is that the collaboration led to Sonos putting more emphasis on retaining the immersive sound effects, and adding four levels of enhancement instead of the originally planned three.
Despite the RNID’s involvement, the new mode isn’t designed solely for the hard of hearing. It’s still just called Speech Enhancement, and it’s not hidden away as an accessibility tool – sound is improved for everyone, and ‘everyone’ now better includes people with mild to moderate hearing loss. The Low and Medium modes work just as well for those of us who simply want a bit of extra clarity in busy scenes.
This isn’t the first use of AI-powered speech separation I’ve seen – I’ve experienced it on Samsung TVs, and in a fun showcase from Philips TVs, where it was used to disable the commentary during sports but preserve the crowd sounds.
But it’s interesting that this is Sonos’ first use of AI sound processing, and the four-year development process, including a year of refinement with the RNID, shows that Sonos has taken a thoughtful approach to how it’s best used – one that isn’t always apparent in other AI sound processing applications. For the full story, here’s my piece interviewing Sonos’ AI and audio developers alongside researchers from the RNID.
It’s just a shame that it’s exclusive to the Sonos Arc Ultra for now – though I’m sure that new versions of the Sonos Ray and Sonos Beam Gen 2 will be along before too long with the same upgraded chip to support the feature.