Elliptic Labs’ 3rd Gesture Dimension

Some time back we briefly introduced Elliptic Labs’ ultrasound-based gesture technology. They’ve added a new… layer to it, shall we say, so we’ll dig in a bit deeper here.

This technology is partially predicated on the fact that Knowles microphones, which are currently dominant, can sense part of the ultrasonic range. That means you don’t necessarily need a separate microphone to include an ultrasound gesture system (good for the BOM). But you do need to add ultrasound transmitters, which emit the ranging signal. They do their signal processing on a DSP hub, not on the application processor (AP) – important, since this is an always-on technology.
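The ranging itself rests on ordinary time-of-flight physics: an ultrasonic pulse goes out from the transmitter, reflects off the hand, and comes back to the microphone. As a rough illustration of that principle (not Elliptic Labs' actual signal processing, which runs on the DSP hub), distance follows from the round-trip time:

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 °C

def distance_from_echo(round_trip_s: float) -> float:
    """Estimate target distance from an ultrasonic pulse's round-trip time.

    The pulse travels out to the target and back, so the one-way
    distance is half the total path.
    """
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

# A round trip of about 2.9 ms corresponds to roughly half a meter.
half_meter_ish = distance_from_echo(0.0029)
```

The real system also has to separate the echo from the transmitted signal and from clutter, which is where the DSP work comes in; the arithmetic above is only the geometry.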

With that in place, they’ve had more or less a standard gesture technology, just based on a different physical phenomenon. They see particular advantage for operation in low light (where a camera may be blind), full sun (which can also blind a camera), and where power is an issue: they claim to use 1/100th the power of a camera-based gesture system. So… wearables, anything always-on. As long as you don’t need the resolution of a camera (which, apparently, they don’t for the way they do gestures), this competes with light-based approaches.


What they’ve just announced is the addition of a 3rd dimension: what they’re calling multi-layer interaction (MLI). It’s not just the gesture you perform, but how far away from the screen you perform it. Or what angle you are from the screen.

For instance, starting from far away, with your hand approaching, at one point it would wake up. Come in further and it will go to the calendar; further still takes you to messages; and finally on to email. Of course, Elliptic Labs doesn’t define the semantics of the gestures and positions; an equipment maker or application writer would do that.
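That layered behavior amounts to a mapping from measured hand distance to an action. A minimal sketch of what an equipment maker might define, with thresholds and app names invented purely for illustration:

```python
from typing import Optional

# Hypothetical distance thresholds (cm) for each interaction layer,
# ordered from farthest to nearest. The actual semantics would be
# defined by the device maker or application writer, not Elliptic Labs.
LAYERS = [
    (60, "wake"),
    (40, "calendar"),
    (25, "messages"),
    (10, "email"),
]

def layer_for_distance(distance_cm: float) -> Optional[str]:
    """Return the innermost layer whose threshold the hand has crossed."""
    selected = None
    for threshold, action in LAYERS:
        if distance_cm <= threshold:
            selected = action  # keep overwriting as we cross each layer
    return selected

# A hand 30 cm out has crossed "wake" and "calendar" but not "messages".
current = layer_for_distance(30)  # "calendar"
```

In practice you would also want hysteresis around each threshold so the UI doesn't flicker between layers when the hand hovers near a boundary.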

And it strikes me that, while this adds – literally – a new dimension to the interface, the semantic architecture will be critical so that users don’t have to mentally map out the 3D space in front of their screen to remember where to go for what. There will have to be a natural progression so that it will be “obvious.” For example, if you’ve gotten to the point of email, then perhaps it will show the list of emails; you can raise and lower your hand to scroll, and then go in deeper to open a selected email. Such a progression would be intuitive (although I use that word advisedly).
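The raise-and-lower-to-scroll idea reduces to mapping vertical hand position onto a list index. A hedged sketch, with the working height range entirely made up for illustration:

```python
def scroll_index(hand_height_cm: float, num_items: int,
                 min_h: float = 10.0, max_h: float = 40.0) -> int:
    """Map vertical hand position onto an index in a scrollable list.

    Heights outside the [min_h, max_h] working range clamp to the list
    ends. The range values are illustrative, not from Elliptic Labs.
    """
    clamped = max(min_h, min(max_h, hand_height_cm))
    fraction = (clamped - min_h) / (max_h - min_h)
    return round(fraction * (num_items - 1))

# A hand at mid-height lands on the middle of a 7-item list (index 3).
middle = scroll_index(25.0, 7)
```

A real implementation would low-pass filter the height estimate so hand tremor doesn't jitter the selection.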

A bad design might force a user to memorize that 1 ft out at 30 degrees left means email and at 30 degrees right means calendar and you open Excel with 90 degrees (straight out) 2 ft away and… and… A random assignment of what’s where that has to be memorized would seem to be an unfortunate design. (And, like all gesture technologies, care has to be taken to avoid major oopses…)

Note that they don’t specifically detect a hand (as opposed to some other object); the system registers whatever is out there. You could be holding your coffee cup; it would work. You could be using your toes or a baseball bat; it would work.

You can also turn it off with a simple gesture so that, for example, if you’re on your phone gesticulating wildly, you don’t inadvertently do something regrettable in the heat of phone passion. Or in case you simply find it annoying.

You can find out more in their announcement.

 

(Image courtesy Elliptic Labs)
