
A Microphone for Gestures and Canines

A while back, when looking at Elliptic Labs’ ultrasonic gesture recognition, we mentioned that they could do this because Knowles microphones work in the ultrasonic range. But they weren’t willing to say much more about the microphones.

So I checked with Knowles; they had announced their ultrasonic microphone back in June. My first question was whether this was just a tweak of the filters or a completely new sensor. And the answer: the MEMS element is the same as the one used in their regular audio microphones; they’ve changed the accompanying ASIC. The packaging is also the same.

The next obvious question is, what is this good for, other than gesture recognition? Things got a bit quieter there – apparently there are some use cases being explored, but they can’t talk about them. So we’ll have to watch for those.

But with respect to the gesture thing, it turns out that, in theory, this can replace the proximity sensor. It’s low enough power that the mic can be operated “always on.” Not only can it detect that something is nearby, in the manner of a proximity sensor, it can go it one better: it can identify what that item is.

From a bill-of-materials (BOM) standpoint, at present you still need to use a separate ultrasonic transmitter, so you’re replacing one component (the proximity detector) with another (the transmitter). But in the future, the speakers could be leveraged, eliminating the transmitter.
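To make the idea concrete, here’s a rough sketch of the basic time-of-flight math behind using a mic as a proximity sensor: transmit a short ultrasonic chirp, listen for the echo, and turn the round-trip delay into a distance. This is not Knowles’ or Elliptic Labs’ actual processing; the sample rate, chirp band, and simulated echo are all assumptions for illustration.

```python
import numpy as np
from scipy.signal import chirp, correlate

FS = 192_000            # sample rate wide enough to capture ultrasound (assumed)
SPEED_OF_SOUND = 343.0  # m/s at room temperature

# 2 ms chirp sweeping 25-45 kHz: above hearing, within the mic's response
t = np.arange(int(0.002 * FS)) / FS
ping = chirp(t, f0=25_000, f1=45_000, t1=t[-1])

# Simulated recording standing in for real capture hardware: the echo
# returns attenuated after a delay corresponding to an object ~30 cm away.
true_distance = 0.30  # metres (used only to build the simulation)
delay = int(2 * true_distance / SPEED_OF_SOUND * FS)
recording = np.zeros(delay + len(ping) + 1000)
recording[delay:delay + len(ping)] += 0.2 * ping
recording += 0.01 * np.random.randn(len(recording))

# Cross-correlate against the transmitted ping; the peak lag is the
# round-trip time, which maps directly to distance.
lag = np.argmax(np.abs(correlate(recording, ping, mode="valid")))
estimated = lag / FS * SPEED_OF_SOUND / 2
print(f"estimated distance: {estimated * 100:.1f} cm")   # ~30 cm
```

Telling what the object is, as opposed to merely how far away it is, obviously takes more than this; that’s where the gesture-recognition smarts come in.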

It occurred to me, however, that for this to become a thing, the ultrasonic detection will really need to be abstracted at the OS (or some higher) level, separating it from the regular audio stream. The way things are now, if you plug a headset into the phone or computer, all the audio gets shunted to the headset, including the ultrasonic signal. Which probably isn’t useful unless you’re trying to teach your dog to use the phone (hey, they’re that intuitive!).

For this really to work, only the audible component should be sent to the headset; the ultrasonic signal and its detection would need to stay in the built-in speaker/mic pair to enable gesture recognition. Same thing when plugging in external speakers.
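As a sketch of what that split might look like, the snippet below carves a wideband stream into an audible component (which could follow the headset path) and an ultrasonic component (which would stay on the internal speaker/mic path). The crossover frequency, filter order, and sample rate are assumptions; no real OS audio stack exposes routing quite this way, which is exactly the point.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 192_000  # assumed sample rate wide enough to carry the ultrasonic band

def split_audible_ultrasonic(samples, crossover_hz=20_000.0):
    """Split a mono stream into (audible, ultrasonic) components."""
    low = butter(8, crossover_hz, btype="low", fs=FS, output="sos")
    high = butter(8, crossover_hz, btype="high", fs=FS, output="sos")
    audible = sosfilt(low, samples)       # safe to shunt to the headset
    ultrasonic = sosfilt(high, samples)   # keep on the built-in speaker/mic path
    return audible, ultrasonic

# Example: a 1 kHz tone (something you'd want in the headset) mixed with
# a 40 kHz ping (something the gesture engine needs to keep).
t = np.arange(FS) / FS
mixed = np.sin(2 * np.pi * 1_000 * t) + 0.3 * np.sin(2 * np.pi * 40_000 * t)
audible, ultrasonic = split_audible_ultrasonic(mixed)
```

The filtering itself is trivial; the hard part is getting the audio plumbing to apply it in the right place.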

I’m sure that’s technically doable, although it probably disturbs a part of the system that’s been fixed for years. Which is never fun to dig into. But sometimes you’ve just got to grit your teeth and shed some of the legacy hardware in order to move forward.

You can find out more about Knowles’ ultrasonic microphone here.

 

[Editor’s note: For anyone clicking in through LinkedIn, I changed the title. It was supposed to be light, but, too late, I realized it could be taken as negative, which wasn’t the intent.]

(Image courtesy Knowles)
