
PZ Progress for Sound Production

A USound Update a Year Later

We’ve covered sound as a general topic quite a few times in these pages. When it comes to MEMS, however, most of that discussion has centered on microphones. MEMS microphones have been around for a long time, even as advances continue.

What had been notably missing until a year ago was MEMS used for sound production instead of sound detection – effectively, MEMS as sound actuator rather than sound sensor. It was USound that we covered last year, and I got a chance to talk with them again at last fall’s MEMS and Sensors Executive Congress. While their basic story – using MEMS to create sound – hasn’t changed, they had some updates and some new platforms for demonstrating – as well as for selling – their technology.

How Does It Sound, Bud?

You may recall from last year that they were talking about earbuds that used a single driver for the entire frequency range. That includes bass, and it’s hard to imagine a tiny MEMS membrane (memsbrane?) pushing around enough air to do low frequencies any justice.

Well, this time I got a chance to listen to them. And, honestly, they sound pretty good. But, also honestly, it’s not a “wow!” thing – unless you know that it’s a tiny membrane making all that noise. So if a consumer were to listen, they probably wouldn’t be blown away – because they don’t know (or care, really) what’s inside the earbud.

So this makes the technology, in my mind, more of a sell to equipment makers than to actual consumers. It allows them to make speakers that do what consumers expect while improving size, reliability, or cost – things that matter, but that don’t automatically translate into different or better sound. So, in fact, consumers might notice a difference – in price, size, or some other feature – just not so much in sound.

This isn’t a diss against the technology; it’s simply my response to having heard them. Of course, the only reason that they can do this with the tiny membrane – which can’t push a lot of air around – is because the earbud is inside your ear canal, and there’s not a lot of air to push around there.
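For a rough sense of why that works, here’s a back-of-envelope calculation – with illustrative numbers of my own, not USound’s specifications. In a small sealed cavity at low frequencies, the pressure swing is roughly the adiabatic constant times ambient pressure times the fraction of the cavity volume that the membrane displaces, so even a tenth of a cubic millimeter of displacement into an ear canal of about a cubic centimeter lands you well over 100 dB SPL.

```python
import math

# Back-of-envelope "pressure chamber" estimate for a sealed ear canal.
# All numbers below are illustrative assumptions, not USound specifications.

gamma = 1.4            # adiabatic index of air
P0 = 101_325.0         # ambient pressure, Pa
V_canal = 1.3e-6       # occluded ear-canal volume, ~1.3 cm^3 (assumed)
dV = 1.0e-10           # peak volume displaced by the membrane, ~0.1 mm^3 (assumed)

# At low frequencies in a sealed cavity, compressing the air adiabatically
# gives a pressure swing of roughly p = gamma * P0 * (dV / V).
p_peak = gamma * P0 * (dV / V_canal)                     # Pa, peak
spl = 20 * math.log10((p_peak / math.sqrt(2)) / 20e-6)   # dB SPL, RMS re 20 uPa

print(f"peak pressure ~ {p_peak:.1f} Pa  ->  ~{spl:.0f} dB SPL")
# ~11 Pa peak, on the order of 110+ dB SPL from a tiny displacement.
```

The same displacement radiating into free air would produce far less sound pressure, which is exactly why the sealed-canal case is the friendly one for a tiny membrane.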

Be Free, Field!

The next application they discussed was headphones, not earbuds. These are free-field in the sense that they’re not just pushing air around within the confines of the ear canal, so the bass frequencies won’t be reproduced as well if they rely only on that one driver.

As a result, these headphones use standard electrodynamic woofers; they don’t use the MEMS technology for the low notes. But, mounted around the woofer are multiple tweeters that can create the effect of 3D immersive sound. Each of the tweeters produces a slightly different sound – which is a function not of the tweeters themselves, but of the sound processing that feeds each one.

Done properly, you get what they call sound externalization, which can give the impression that the sound is coming from somewhere outside in front of you or behind you even though the sound is being produced right over your ear.
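As a toy illustration of that “slightly different signal per tweeter” idea – my own simplification, not USound’s actual processing, which would presumably involve proper HRTF-style filtering – here’s the crudest possible version: the same source signal sent to each tweeter with its own delay and gain.

```python
import numpy as np

# Toy sketch of per-tweeter spatial rendering: one mono source is sent to
# each tweeter with its own delay and gain. Real externalization would use
# HRTF-style filters; this only illustrates the "one signal in, several
# slightly different signals out" structure. All values are assumptions.

FS = 48_000  # sample rate, Hz

def render_tweeter_feeds(mono, delays_ms, gains):
    """Return one delayed, scaled copy of `mono` per tweeter."""
    feeds = []
    for delay_ms, gain in zip(delays_ms, gains):
        d = int(round(delay_ms * 1e-3 * FS))           # delay in samples
        feed = np.concatenate([np.zeros(d), mono]) * gain
        feeds.append(feed[: len(mono)])                 # trim back to length
    return feeds

# Example: a 1 kHz test tone rendered to four tweeters around the woofer.
t = np.arange(FS) / FS
source = np.sin(2 * np.pi * 1000 * t)
feeds = render_tweeter_feeds(source,
                             delays_ms=[0.0, 0.15, 0.30, 0.45],  # assumed
                             gains=[1.0, 0.9, 0.8, 0.7])         # assumed
print(len(feeds), "tweeter feeds of", len(feeds[0]), "samples each")
```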

So Long, Obnoxious Sound Leakage

Then there’s everyone’s favorite (not!) sensation of being in some public place and getting bits and pieces of the earphone sounds from everyone around you. USound has a solution for this, although not with earphones, but with VR glasses. And it’s not a solution for blotting out everyone else’s noises, but rather for being a good citizen and not littering everyone else’s soundscape with your personal sound experience.

We talked last year about how the USound response is fast enough to allow for non-periodic sound cancellation. They’ve leveraged this in the glasses, placing a speaker behind the ear on the stem. It emits the inverse of the sound being produced in the earpiece itself, which keeps that sound from traveling past the ear to anyone standing behind you.

Of course, this particular version of sound cancellation is probably somewhat easier than your average noise cancellation. In order to cancel unknown and unwanted sounds around you, you first have to detect them with a microphone and then do whatever processing is necessary to invert the signal and add it to your sound stream. In the AR/VR glasses case, you’re not cancelling outside sounds; you’re cancelling the very sounds you’re also creating. So the processing used to create the sound for the main tweeters and woofer can simultaneously create the cancellation signal, which will probably be faster and more accurate than a similar system canceling outside noise of unknown origin.
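To make the distinction concrete, here’s a minimal sketch – my own, with an assumed leakage-path delay and attenuation – of why cancelling a signal you generated yourself is the easy case: the cancellation feed is just an inverted, delayed, scaled copy of the known program material, with no microphone or adaptive estimation in the loop.

```python
import numpy as np

# Minimal sketch: cancelling a *known* signal vs. an *unknown* one.
# The delay and attenuation of the leakage path are assumed values;
# a real system would characterize (or adapt to) them.

FS = 48_000
LEAK_DELAY = 24   # samples of acoustic delay from earpiece to bystander (assumed)
LEAK_GAIN = 0.2   # attenuation along the leakage path (assumed)

def leakage(signal):
    """Sound that escapes the earpiece toward bystanders (assumed model)."""
    return LEAK_GAIN * np.concatenate([np.zeros(LEAK_DELAY), signal])[: len(signal)]

t = np.arange(FS) / FS
program = np.sin(2 * np.pi * 440 * t)   # the audio the glasses are playing

# Because `program` is known in advance, the anti-signal is simply its
# inverted, delayed, scaled copy -- no microphone or extra sensing needed.
anti = -leakage(program)

residual = leakage(program) + anti
print("residual leakage energy:", float(np.sum(residual**2)))   # ~0.0
```

Cancelling outside noise, by contrast, means estimating that leakage model and the incoming signal on the fly, which is where the latency and accuracy penalties come from.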

Commercial Steps Forward

Finally, they’ve moved forward along a number of tracks to further enable them to do the rollicking business they’d like to do.

  • In order to reduce the overall power of their solution, they’ve created their own ASIC rather than relying on some other processing element. Their speakers are still piezoelectric, and they still need high drive voltages, but the new ASIC manages that more efficiently – drawing less power – than before. (A rough sketch of why drive power matters follows this list.)
  • They changed their foundry strategy, moving to STMicroelectronics as their source. You might wonder why this might be important, since, really, who cares who builds it? Well, apparently, their prospective customers care. They want to be comfortable that their supply will come from a foundry that has demonstrated an ability to produce high volumes reliably.
  • They’re moving to a subsystem sell rather than a MEMS chip sell. This seems consistent with so much other new technology: rather than having to teach the world how to do it, just do it yourself and sell the solution. It’s more work for USound, but less work for their customers.
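For a rough feel for why the drive electronics matter so much – again with illustrative numbers of my own, not USound’s figures – a piezoelectric speaker looks electrically like a capacitor, and naively charging and discharging a capacitor to a high voltage costs energy on the order of C·V²·f. An efficient driver (for instance, one that recycles charge – a general technique, not necessarily what USound does) tries to avoid paying that in full.

```python
# Back-of-envelope power for naively driving a piezo (capacitive) speaker.
# All numbers are illustrative assumptions, not USound's specifications.

C = 100e-9     # piezo capacitance, 100 nF (assumed)
V = 30.0       # peak drive voltage, volts (assumed)
f = 1_000.0    # representative signal frequency, Hz (assumed)

# A hard-switched driver spends roughly C * V^2 of energy per full
# charge/discharge cycle (half dissipated while charging, the stored half
# thrown away on discharge), so the average power is about:
P_naive = C * V**2 * f
print(f"naive drive power ~ {P_naive * 1e3:.0f} mW")   # ~90 mW

# A driver that recycles most of the stored charge can cut this
# substantially -- the kind of saving a dedicated ASIC goes after.
```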

We’ll continue to keep an eye on this space as developments warrant.

 

More info:

USound
