
With a Wave of My Hand

Microchip’s Gesture-Interface Chip Avoids Unsightly Screen Smudges

Touch-screen interfaces are so last week. If you really want to impress your friends, your boss, or your fellow engineers, what you need is a gesture-based interface. Gesture is like touch, but cooler. You never actually touch the screen (or anything else). Instead, you wave your hands in front of it and the system just knows what you want. It’s the embodiment of Arthur C. Clarke’s axiom that “any sufficiently advanced technology is indistinguishable from magic.”

And how much will it cost you, Dear Reader, to add this magic to your next design? About three bucks. That’s right, there’s a new gesture-interface controller chip out that does pretty much everything you need, for about $3 in volume. And that includes firmware.

Who conjures this magic? Who dares to dabble in the arcane arts, hitherto unknown to all but the most adept necromancers? Be they magicians? Nay, fear not, for they are but fellow nerds. From Chandler.

Microchip, the company with the world’s easiest-to-remember name, has just unleashed GestIC, its new line of gesture-controller chips. Technically called the MGC3130, the tiny little 28-pin device handles all the arcane wizardry necessary to make gesture interfaces work. About all you need to add is a set of copper strips on your printed-circuit board to act as antennas. GestIC does the rest, converting subtle RF interference patterns into an I2C, SPI, or PS/2 bit stream. No sorcery required.
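
For the firmware-minded, here’s roughly what reading the chip might look like from the host side. To be clear, this is a sketch, not Microchip’s actual protocol: the I2C address, message length, and payload layout below are placeholders invented for illustration, and the real format comes from the MGC3130 datasheet and Microchip’s GestIC library.

    /* Hypothetical sketch of polling a gesture controller over I2C.
     * The 7-bit address, message length, and payload layout here are
     * placeholders, not the MGC3130's real protocol -- consult the
     * datasheet and Microchip's GestIC library for the actual format. */
    #include <stdint.h>
    #include <stdbool.h>

    #define GESTIC_I2C_ADDR  0x42u   /* placeholder 7-bit address */
    #define GESTIC_MSG_LEN   16u     /* placeholder message size  */

    /* Platform-specific I2C read, provided elsewhere by the host MCU's HAL. */
    extern bool i2c_read(uint8_t addr, uint8_t *buf, uint16_t len);

    typedef struct {
        uint8_t  gesture_code;   /* e.g. flick east/west, circle, ... */
        uint16_t x, y, z;        /* hand position inside the sensing bubble */
    } gestic_event_t;

    /* Poll the controller once; returns true if a fresh event was decoded. */
    bool gestic_poll(gestic_event_t *ev)
    {
        uint8_t msg[GESTIC_MSG_LEN];

        if (!i2c_read(GESTIC_I2C_ADDR, msg, GESTIC_MSG_LEN))
            return false;                 /* bus error or no data ready */

        /* Placeholder payload layout: byte 0 = gesture code,
         * bytes 1..6 = little-endian 16-bit x, y, z position. */
        ev->gesture_code = msg[0];
        ev->x = (uint16_t)(msg[1] | (msg[2] << 8));
        ev->y = (uint16_t)(msg[3] | (msg[4] << 8));
        ev->z = (uint16_t)(msg[5] | (msg[6] << 8));
        return true;
    }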

It’s worth pointing out here that GestIC doesn’t use cameras like the Xbox Kinect, nor does it use accelerometers like the Wii remote or your smartphone. It’s based entirely on e-field detection, the physical effect of waving your hand (or most any biological object) through an electric field. Said field is generated by the GestIC chip itself and “broadcast” (in a very limited sense) to a small bubble of space directly in front of your PCB. The chip then senses a disturbance in the force, so to speak. It can tell you not only that your hand is inside the bubble, but also which way you’re moving, how fast, and so on. The result is that you can cobble together some pretty slick user interfaces based on nothing but hand-waving.
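
To make “which way and how fast” concrete: given successive position reports from the chip, a host can difference them into a velocity estimate. The units, sample rate, and data structures below are assumptions made for this sketch, not GestIC’s actual data format.

    /* Illustrative only: estimate hand velocity from two successive
     * position reports. Units and sample rate are assumptions. */
    typedef struct { float x, y, z; } pos_mm_t;   /* position in millimetres (assumed) */
    typedef struct { float vx, vy, vz; } vel_t;   /* velocity in mm/s */

    /* dt_s: time between the two samples, in seconds
     * (e.g. 0.005 s if the controller reported at an assumed 200 Hz). */
    vel_t estimate_velocity(pos_mm_t prev, pos_mm_t curr, float dt_s)
    {
        vel_t v;
        v.vx = (curr.x - prev.x) / dt_s;   /* large negative vx = fast right-to-left
                                              sweep, if x grows to the right (assumed) */
        v.vy = (curr.y - prev.y) / dt_s;
        v.vz = (curr.z - prev.z) / dt_s;
        return v;
    }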

One good example is page-turning for an e-book reader. Waving your hand from right to left is a natural gesture for turning pages (for Western languages, anyway). GestIC can easily detect this motion from a range of about 15 cm (roughly 6 inches) off the surface of the circuit board. The on-chip A/D converters, 32-bit processor, and built-in firmware recognize the complex pattern of e-field effects caused by such a gesture, crunch it, and emit a short serial bit stream that says, essentially, “He’s doing that page-turning thing.” The same process works in the other direction, for flipping pages backwards (or for reading Arabic).
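
On the host side, acting on that message can be as simple as a switch statement. The gesture codes and e-reader hooks below are made up for the sketch; the real codes are defined by the GestIC firmware and documented by Microchip.

    /* Hypothetical gesture codes; the real values come from Microchip's
     * GestIC documentation. */
    typedef enum {
        GESTURE_NONE = 0,
        GESTURE_FLICK_WEST,     /* right-to-left swipe */
        GESTURE_FLICK_EAST,     /* left-to-right swipe */
        GESTURE_CIRCLE_CW,
        GESTURE_CIRCLE_CCW
    } gesture_t;

    /* Application hooks, implemented elsewhere in the e-reader firmware. */
    extern void ereader_next_page(void);
    extern void ereader_prev_page(void);

    void handle_gesture(gesture_t g)
    {
        switch (g) {
        case GESTURE_FLICK_WEST:  ereader_next_page(); break;  /* turn forward  */
        case GESTURE_FLICK_EAST:  ereader_prev_page(); break;  /* turn backward */
        default:                  break;   /* ignore everything else */
        }
    }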

Even these simple gestures are trickier than they sound, which makes GestIC all the more impressive. Here’s a test: Wave your hand from right to left, as if turning a page. Good. Did you follow that up with a reverse movement back to the right? Of course you did. But how is the controller supposed to know which of those motions is the real gesture and which is simply an unconscious return to a comfortable position? With GestIC, it just knows.

The device also recognizes swipes up and down, flicks, circles, pointing, and various other gesticulations. You can even finger paint with it, as long as you’re not trying to be too accurate. And that’s about the only drawback to GestIC: it’s not super accurate, at least not compared to a pen or a touch-screen. It’s fine for commands and even for typing in space, but you might not want it for handwriting or other tasks that demand fine motor control.

Since GestIC doesn’t use cameras, it’s a whole lot cheaper than a Kinect-type interface. It doesn’t have a camera’s long range, but it also doesn’t have a camera’s blind spots. Cameras mounted to smaller devices (think laptop screen) typically have blind spots very close to the camera. Get too close and the camera doesn’t work. GestIC has the opposite characteristic: it works up close but loses sensitivity beyond about six inches.

And unlike pen- or accelerometer-based interfaces, GestIC doesn’t require the user to hold anything. Indeed, holding a pen or other inert device may actually confuse it. GestIC can’t detect plastic pens, only fleshy fingers.

One upside to all this is that GestIC uses very little power. Cameras and accelerometers need to be powered up to be useful; an e-field takes almost no energy to generate and can be switched on or off at a moment’s notice. Microchip estimates that GestIC uses about one-tenth the power of a camera-based interface. Even at full chat it draws only 90 mW, and its sleep-mode power is down in the microwatt range.
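
A quick back-of-the-envelope shows why that matters. Taking the 90 mW active figure and microwatt-class sleep power from above, and assuming (purely for illustration) that the chip only needs to scan two percent of the time, the average draw works out to under 2 mW:

    /* Back-of-the-envelope average power for a duty-cycled sensor.
     * Active and sleep figures are from the article; the duty cycle
     * is an assumed example value. */
    #include <stdio.h>

    int main(void)
    {
        const double p_active_mw = 90.0;     /* full-scan power            */
        const double p_sleep_mw  = 0.001;    /* ~1 uW sleep, approximate   */
        const double duty        = 0.02;     /* assume active 2% of the time */

        double p_avg_mw = duty * p_active_mw + (1.0 - duty) * p_sleep_mw;
        printf("Average power: %.2f mW\n", p_avg_mw);   /* ~1.80 mW */
        return 0;
    }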

Even if you’re not into full-on gesture interfaces, the chip can be used to build a slick wake-up feature. Anytime the user reaches for the device, GestIC can detect the impending grab and wake up the system a fraction of a second before anyone touches it. At worst, it’s a cool feature; at best, it gives the system time to energize displays, disk drives, or other peripherals a few moments before they’re needed. And for $3 in component costs, that’s not a bad thing to have. Finally, there’s no reason you can’t combine a gesture interface with a touch-screen, too.
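
In firmware, that wake-on-approach trick might look something like the sketch below, with the controller configured to raise an interrupt when a hand enters the bubble. The ISR name, flag, and wake-up helpers are placeholders; the actual wiring depends on your MCU and on how you configure the chip.

    /* Hypothetical wake-on-approach glue: the gesture controller raises
     * an interrupt when a hand enters the sensing bubble, and the ISR
     * wakes the rest of the system ahead of the touch. */
    #include <stdbool.h>

    static volatile bool approach_pending = false;

    /* Interrupt handler tied to the controller's proximity output
     * (pin name and ISR registration are platform-specific). */
    void gestic_proximity_isr(void)
    {
        approach_pending = true;   /* defer the heavy lifting to the main loop */
    }

    /* Platform-specific wake-up helpers, implemented elsewhere. */
    extern void display_power_on(void);
    extern void storage_spin_up(void);

    void main_loop_poll(void)
    {
        if (approach_pending) {
            approach_pending = false;
            display_power_on();    /* start lighting the panel early        */
            storage_spin_up();     /* so it's ready by the time of a touch  */
        }
    }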

We don’t flip front-panel toggle switches anymore; typing cryptic commands at the shell prompt went out of fashion a few decades ago (almost); the WIMP (windows, icons, mouse, and pointer) interface is in steady decline; and speaking to our computers is embarrassing and annoys bystanders. That leaves touch and gesture, the oldest and most basic kinds of human communication. Until computers can read our minds, Microchip’s GestIC may just do the trick.
