
A Microphone for Gestures and Canines

A while back, when looking at Elliptic Labs' ultrasonic gesture recognition, we mentioned that they were able to do this because Knowles microphones work into the ultrasonic range. But Elliptic Labs wasn't willing to say much more about the microphones themselves.

So I checked with Knowles; they had announced their ultrasonic microphone back in June. My first question was whether this was just a tweak of the filters or a completely new sensor. The answer: the MEMS element is the same as the one used in their regular audio microphones; it's the accompanying ASIC that has changed. The packaging is also the same.

The next obvious question is, what is this good for, other than gesture recognition? Things got a bit quieter there – apparently there are some use cases being explored, but they can’t talk about them. So we’ll have to watch for those.

But with respect to the gesture thing, it turns out that, in theory, this can replace the proximity sensor. It's low-power enough that the mic can be operated “always on.” Not only can it detect that something is nearby, in the manner of a proximity sensor, it can go one better: it can identify what that item is.
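As a rough illustration of the proximity idea (this is just textbook echo ranging, not anything Knowles has described), the distance to a nearby object falls out of the round-trip time of an ultrasonic ping:

```python
# Sketch of echo-based proximity sensing: emit an ultrasonic ping
# from the speaker, time how long its echo takes to reach the mic,
# and convert that round trip into a distance. Numbers are illustrative.

SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 C

def distance_from_echo(round_trip_s: float) -> float:
    """Distance to the reflecting object, given the time between
    emitting the ping and hearing its echo (sound travels out and back)."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

# A hand 10 cm from the phone returns an echo in about 0.58 ms:
rt = 2 * 0.10 / SPEED_OF_SOUND_M_S
print(round(distance_from_echo(rt), 3))  # prints 0.1
```

Identifying *what* the object is would take far more than this, of course; the echo's shape, not just its timing, carries that information.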

From a bill-of-materials (BOM) standpoint, at present you still need to use a separate ultrasonic transmitter, so you’re replacing one component (the proximity detector) with another (the transmitter). But in the future, the speakers could be leveraged, eliminating the transmitter.

It occurred to me, however, that, for this to become a thing, the ultrasonic detection will really need to be abstracted at the OS (or some higher) level, separating it from the regular audio stream. The way things are now, if you plug a headset into the phone or computer, all the audio gets shunted to the headset, including the ultrasonic signal. Which probably isn’t useful unless you’re trying to teach your dog to use the phone (hey, they’re that intuitive!).

For this really to work, only the audible component should be sent to the headset; the ultrasonic signal and its detection would need to stay in the built-in speaker/mic pair to enable gesture recognition. Same thing when plugging in external speakers.
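To make that routing requirement concrete, here's a minimal sketch (my own, not from Knowles or any actual audio stack) of the kind of band split the OS would have to perform: audible content goes out to the headset, while the ultrasonic content stays on the built-in speaker/mic path. It uses a crude one-pole low-pass/high-pass pair; a real implementation would use proper crossover filters.

```python
import math

def one_pole_coeff(cutoff_hz: float, sample_rate_hz: float) -> float:
    """Smoothing coefficient for a one-pole low-pass filter."""
    return 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate_hz)

def split_bands(samples, cutoff_hz=20_000.0, sample_rate_hz=96_000.0):
    """Return (audible, ultrasonic) components of a sample stream.

    audible    = low-passed signal, routed to the headset
    ultrasonic = the high-frequency residue, kept on the built-in path
    """
    a = one_pole_coeff(cutoff_hz, sample_rate_hz)
    audible, ultrasonic = [], []
    state = 0.0
    for x in samples:
        state += a * (x - state)      # low-pass: tracks slow content
        audible.append(state)
        ultrasonic.append(x - state)  # what the low-pass missed
    return audible, ultrasonic

# A steady (0 Hz) signal should end up almost entirely in the audible path:
aud, ultra = split_bands([1.0] * 2000)
print(round(aud[-1], 2), round(ultra[-1], 2))  # prints 1.0 0.0
```

The filtering itself is trivial; the hard part, as noted above, is getting the audio plumbing to treat the two bands as separately routable streams.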

I’m sure that’s technically doable, although it probably disturbs a part of the system that’s been fixed for years. Which is never fun to dig into. But sometimes you’ve just got to grit your teeth and shed some of the legacy baggage in order to move forward.

You can find out more about Knowles’ ultrasonic microphone here.

 

[Editor’s note: For anyone clicking in through LinkedIn, I changed the title. It was supposed to be light, but, too late, I realized it could be taken as negative, which wasn’t the intent.]

(Image courtesy Knowles)

