
A Microphone for Gestures and Canines

A while back, when looking at Elliptic Labs' ultrasonic gesture recognition, we mentioned that they were able to do this because Knowles microphones work in the ultrasonic range. But Elliptic Labs wasn't willing to say much more about the microphones themselves.

So I checked with Knowles; they had announced their ultrasonic microphone back in June. My first question was whether this was just a tweak of the filters or if it was a completely new sensor. And the answer: the MEMS is the same as the one used for their regular audio microphones; they’ve changed the accompanying ASIC. The packaging is also the same.

The next obvious question is, what is this good for, other than gesture recognition? Things got a bit quieter there – apparently there are some use cases being explored, but they can’t talk about them. So we’ll have to watch for those.

But with respect to the gesture thing, it turns out that, in theory, this can replace the proximity sensor. It’s low enough power that the mic can be operated “always on.” Not only can it detect that something is nearby, in the manner of a proximity sensor, it can go one better: it can identify what that item is.
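To make the proximity-sensor comparison concrete, here's a minimal sketch of how echo timing turns into a distance estimate: transmit a short ultrasonic burst, cross-correlate the mic capture against it, and convert the round-trip delay into a range. The numbers and function names are purely illustrative; this isn't Knowles' or Elliptic Labs' actual processing.

```python
import numpy as np

SAMPLE_RATE = 192_000    # Hz; fast enough to capture a ~40 kHz burst (assumed value)
SPEED_OF_SOUND = 343.0   # m/s at room temperature

def make_burst(freq_hz=40_000, duration_s=0.001):
    """A short ultrasonic tone burst to play through the transmitter/speaker."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return np.sin(2 * np.pi * freq_hz * t)

def estimate_distance(mic_capture, burst):
    """Cross-correlate the mic capture with the burst and convert the
    echo's round-trip delay into a one-way distance in meters."""
    corr = np.correlate(mic_capture, burst, mode="valid")
    delay_samples = int(np.argmax(np.abs(corr)))
    round_trip_s = delay_samples / SAMPLE_RATE
    return SPEED_OF_SOUND * round_trip_s / 2

# Synthetic example: an object about 0.3 m away reflects the burst back.
burst = make_burst()
echo_delay = int(2 * 0.3 / SPEED_OF_SOUND * SAMPLE_RATE)
mic_capture = np.zeros(echo_delay + len(burst) + 1_000)
mic_capture[echo_delay:echo_delay + len(burst)] += 0.2 * burst
print(f"Estimated distance: {estimate_distance(mic_capture, burst):.2f} m")
```

Identifying *what* the object is would take more than this (think signature classification on the reflected waveform), but the always-on range estimate alone already covers what a dedicated proximity sensor does.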

From a bill-of-materials (BOM) standpoint, at present you still need to use a separate ultrasonic transmitter, so you’re replacing one component (the proximity detector) with another (the transmitter). But in the future, the speakers could be leveraged, eliminating the transmitter.

It occurred to me, however, that, for this to become a thing, the ultrasonic detection will really need to be abstracted at the OS (or some higher) level, separating it from the regular audio stream. The way things are now, if you plug a headset into the phone or computer, all the audio gets shunted to the headset, including the ultrasonic signal. Which probably isn’t useful unless you’re trying to teach your dog to use the phone (hey, they’re that intuitive!).

For this really to work, only the audible component should be sent to the headset; the ultrasonic signal and its detection would need to stay in the built-in speaker/mic pair to enable gesture recognition. Same thing when plugging in external speakers.
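As a rough illustration of the kind of split the audio stack would need, here's a minimal sketch that separates a mic buffer into audible and ultrasonic bands, so that only the former would ever be handed to a headset path while the latter feeds the gesture engine. It assumes nothing about any particular OS's audio APIs; the crossover frequency, filter order, and sample rate are illustrative choices.

```python
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 96_000   # Hz; the capture path must run well above normal audio rates
CROSSOVER_HZ = 20_000  # roughly the top of the audible band

def split_streams(mic_samples):
    """Split one mic buffer into (audible, ultrasonic) components."""
    low = butter(8, CROSSOVER_HZ, btype="lowpass", fs=SAMPLE_RATE, output="sos")
    high = butter(8, CROSSOVER_HZ, btype="highpass", fs=SAMPLE_RATE, output="sos")
    audible = sosfilt(low, mic_samples)       # would follow the normal audio route
    ultrasonic = sosfilt(high, mic_samples)   # would stay with the gesture engine
    return audible, ultrasonic

# Example: a 1 kHz voice-band tone mixed with a weak 40 kHz gesture reflection.
t = np.arange(SAMPLE_RATE // 10) / SAMPLE_RATE
mic = np.sin(2 * np.pi * 1_000 * t) + 0.1 * np.sin(2 * np.pi * 40_000 * t)
audible, ultrasonic = split_streams(mic)
print(f"audible RMS: {np.sqrt(np.mean(audible**2)):.3f}, "
      f"ultrasonic RMS: {np.sqrt(np.mean(ultrasonic**2)):.3f}")
```

In a real system this split would live in the platform's audio-routing layer rather than in application code; the point is only that the two bands can be peeled apart and routed independently.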

I’m sure that’s technically doable, although it probably disturbs a part of the system that’s been fixed for years. Which is never fun to dig into. But sometimes you’ve just got to grit your teeth and shed some of the legacy hardware in order to move forward.

You can find out more about Knowles’ ultrasonic microphone here.


[Editor’s note: For anyone clicking in through LinkedIn, I changed the title. It was supposed to be light, but, too late, I realized it could be taken as negative, which wasn’t the intent.]

(Image courtesy Knowles)
