posted by Bryon Moyer
Some of you may have come to this link already; if you did, you read a piece that voiced some confusion about the positioning of two new products from PointGrab. I had tried to do what I could with the information available at the time, but I remained confused. Since then I have gotten much more specific information, so those questions have been resolved, and I have redone what follows to explain more clearly what’s going on.
You may recall that my discussion with PointGrab last fall touched on an evolving gesture approach for screen-and-cursor-based devices. The idea was that existing gesture-based approaches simply let the user move the cursor on the screen as if using a mouse. The new approach was to bypass the mouse and use gestures directly.
The new AirTouch product works along those lines, although rather than using gestures per se, it focuses on transforming the screen into a touchscreen that you don’t have to touch. You simply point and click, like you would on a tablet. Only you’re doing it from a distance.
This actually raises some experiential questions, since there is no specific feedback on the screen, like a cursor trailing around, to show you where you’re pointing. With a touchscreen, you literally touch an icon or button. But if you’re 20 feet away, well, you’ve got this virtual screen in mid-air, and it’s not obvious where everything lies. You can have the interface highlight buttons when you “hover” over them, so that could be a clue. But PointGrab says that, while it’s hard to explain, the way they’ve done it, within a few seconds of trying you feel oriented and it becomes extremely natural.
AirTouch itself does not support any gestures, but it can be combined with gesture products. In other words, electronics that use AirTouch can also provide gesture recognition, but it’s not AirTouch doing the gestures.
How do they figure out where you’re pointing? With a stereo camera that provides depth information and looks at where your pointing finger is relative to your eyes. The camera responds to the IR spectrum so that it can work in any room lighting conditions.
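As a rough sketch of the geometry involved, the pointed-at spot can be modeled as the point where the ray from the eye through the fingertip intersects the screen plane. The coordinate frame, function name, and the assumption that the screen lies in the z = 0 plane are mine, for illustration only, not PointGrab’s actual implementation:

```python
def pointing_target(eye, fingertip):
    """Intersect the eye->fingertip ray with the screen plane z = 0.

    eye, fingertip: (x, y, z) tuples in a frame where the screen
    lies in the z = 0 plane (positions as estimated from stereo depth).
    Returns the (x, y) screen coordinate being pointed at.
    """
    ex, ey, ez = eye
    fx, fy, fz = fingertip
    dz = fz - ez
    if dz == 0:
        raise ValueError("ray is parallel to the screen")
    t = -ez / dz                       # ray parameter where z reaches 0
    return (ex + t * (fx - ex), ey + t * (fy - ey))
```

With an eye 2 m from the screen and a fingertip slightly in front of and offset from it, the small eye-to-finger offset projects out to a larger displacement on the screen, which is why the eye position matters and not just the finger.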
This product is targeted at “consumer electronics” – which needs a bit of unpacking. Frankly, these days, any appliance could be considered electronic. More typically, however, “home electronics” have referred to your entertainment center: stereo, speakers, television, VCR – er – DVD player, etc.
In this case, it refers to anything that could have a touchscreen (whether or not the screen is actually touchable). TV, computer, tablet, smartphone, or set-top box (as viewed through the TV). Not audio equipment (unless you somehow operate it through your screen).
Meanwhile, in the other corner, we have PointSwitch. This is a simpler, lower-cost solution for all of the “home environment” devices – frankly, all the other electronics (and lower-tech things like thermostats, dishwashers, lighting, whatever). This is a new market for PointGrab.
PointSwitch supports what I’ll call a “point plus” interface. By that I mean that your primary interaction is pointing, which works for toggle functions like on/off. But you can also raise or lower your pointed finger to do things like brighten or dim lights or change the thermostat temperature. More sophisticated gestures are not currently supported.
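The “point plus” model described above can be sketched as a tiny event handler: a discrete pointing event toggles the device, and vertical finger movement while pointing nudges an analog level. PointGrab has not published an API, so the class, event names, and step size here are my own assumptions:

```python
class PointPlusDevice:
    """Hypothetical device driven by a 'point plus' interface."""

    def __init__(self, low=0, high=100, step=10):
        self.on = False
        self.level = high                    # e.g. full brightness
        self.low, self.high, self.step = low, high, step

    def pointed_at(self):
        """A discrete point event toggles on/off."""
        self.on = not self.on

    def finger_moved(self, direction):
        """direction is +1 (finger raised) or -1 (finger lowered)."""
        if self.on:
            self.level = max(self.low,
                             min(self.high,
                                 self.level + direction * self.step))
```

So pointing at a lamp toggles it, and raising or lowering the finger walks the level up or down in steps, clamped to its range.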
First of all, these kinds of interfaces have to be low-cost. So they’ve partnered to provide a simple, inexpensive camera module that fits unobtrusively into the devices. Of course, one challenge is obviously going to be specificity: if you have a light switch and thermostat and stereo and an apartment-style washer/dryer stack all within view at the same time, how do you keep from operating them all at once with a single pointing event?
They claim to have achieved high selectivity so that this doesn’t happen; items need to be located at least 6” apart to avoid any confusion. One other possible issue could arise if, say, you have a thermostat positioned 8” above a light dimmer switch. You want to brighten the lights, so you point at the light switch – no problem – and then move your finger up, and the dimmer brings up the lights. Great. But at some point, because you’ve raised your finger, you’re pointing at the thermostat, even though you started below at the light switch. The thermostat doesn’t know that; it knows only that it’s now being pointed at. So this kind of positioning issue must also be considered in the overall design of a room.
While all PointSwitch does right now is handle this pointing interface, it opens the door to future features like detecting when the room is empty and turning down or off the lights or lowering the temperature.
Like AirTouch, PointSwitch responds to light in the IR range so that it can work in a completely darkened (or brightly washed-out) room.
posted by Bryon Moyer
The number of microphone output options just got bigger by one.
Historically, there have been analog microphones, where you get a real-deal audio signal to play with, and digital microphones. The question, for the digital versions, is what that means: how are the 1s and 0s to be interpreted?
According to Invensense, until their latest release, all digital microphones took the audio signal, digitized it, and then ran that signal into a codec that creates a PDM signal. For anyone not steeped in this stuff (including yours truly), PDM is “pulse density modulation.” In other words, the number of pulses per unit time relates to the value being communicated. More pulses = higher number.
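To make “more pulses = higher number” concrete, here is a toy first-order delta-sigma modulator: for a constant input level between 0.0 and 1.0, the fraction of 1 pulses in the output stream approaches the input value. This illustrates PDM in general, not any particular microphone’s modulator:

```python
def pdm_encode(level, n_bits):
    """Encode a constant level in [0, 1] as n_bits PDM pulses.

    A running error accumulator decides each pulse, so 1s come out
    at a density proportional to the input level.
    """
    out, error = [], 0.0
    for _ in range(n_bits):
        if level >= error:
            out.append(1)
            error += 1.0 - level     # emitted a 1: overshoot by (1 - level)
        else:
            out.append(0)
            error -= level           # emitted a 0: undershoot by level
    return out
```

An input of 0.75 yields a stream in which roughly three out of every four pulses are 1s; a louder (larger) input produces a denser stream.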
There are actually other potentially confusing PxM formats. PCM – pulse code modulation – is more or less the strict sampling result of an analog signal. It’s got a role in storing music on CDs, for example. PWM – pulse width modulation – is what you get if you take PDM and, instead of having discrete pulses per unit time, you abut them – that is, instead of five separate unit pulses, for example, you run them together to create one pulse five units long.
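The PDM/PWM relationship above can be shown with a toy frame (the frame length and value here are arbitrary illustrations): both carry five 1s out of eight slots, but PWM abuts them into one five-unit pulse while PDM spreads them across the frame:

```python
def pwm_frame(value, frame_len):
    """One PWM frame: `value` abutted 1s followed by 0s."""
    return [1] * value + [0] * (frame_len - value)


def pdm_frame(value, frame_len):
    """One PDM-style frame carrying the same count of 1s, spread evenly."""
    out, acc = [], 0
    for _ in range(frame_len):
        acc += value
        if acc >= frame_len:
            acc -= frame_len
            out.append(1)
        else:
            out.append(0)
    return out
```

Both frames integrate to the same value; only the distribution of the pulses differs, which is exactly the PDM-versus-PWM distinction.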
Most systems expect PDM signals from their digital microphones, with the decoded audio then moved between devices in I2S format. And I’ll be honest: when I first posted this, my mind mapped I2S to I2C. Which is incorrect. I2S is used to interconnect audio devices. Invensense sees an opportunity with some systems that take audio in but have no audio out. That may sound a bit strange, since most audio is done for the pleasure of our ears. But, increasingly, we’ll be able to use sound to control our systems. So a smart watch or some other kind of wearable gadget might respond to our voice commands or other audio cues. Such gadgets have no speaker, so they’re not reproducing sound for us; they just consume it.
And, apparently, such devices don’t need PDM. So, using typical digital microphones, they’d have to take the encoded data and decode it. Invensense has a new option for them: a microphone that simply skips the encoding step. What you get is the direct filtered output of the ADC, formatted for I2S. The idea is that this simplifies the design of the gadget.
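What “take the encoded data and decode it” amounts to, roughly, is a low-pass-filter-and-decimate step: a crude version simply averages each block of PDM pulses back into a sample value. This is a sketch of the principle only, not Invensense’s actual filter:

```python
def pdm_decode(bits, decimation):
    """Average each block of `decimation` PDM bits into one sample.

    A real decoder uses a proper low-pass decimation filter; a block
    average is the simplest possible stand-in for illustration.
    """
    return [sum(bits[i:i + decimation]) / decimation
            for i in range(0, len(bits), decimation)]
```

Skipping the PDM encode step, as the new microphone does, means the gadget’s designer never has to build this decode stage at all.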
You can get more info in their announcement of solutions for always-on wearables.
Meanwhile, the next day they also announced a new analog microphone…
[Editorial note: this was updated to correct the I2C vs. I2S error, as noted in the above.]
posted by Dick Selwood
Ten years ago today, the Mars rover Opportunity bounced its way onto the surface of Mars at the start of a three-month mission. In the ten years since, as well as driving 24 miles, the little machine has added enormously to our understanding of the history of the planet.
And this is a huge endorsement of the team who put together the electronics. The development process started nearly twenty years ago, and by the time the mission launched, most of the electronics used was, to put it kindly, mature. The central processor is a 32-bit RAD6000 microprocessor, a radiation-hardened version of the POWER1 processor from IBM’s RS/6000 workstations, which launched around 1990.
Just look around: how much of the electronics you own is ten years old and still functioning? What software are you using that ceased development around 15 years ago? That is a little unfair, since the software on Opportunity has undergone several upgrades.
That in itself is quite mind-boggling. When did you last do a major software upgrade and how easily did it go?
This week there was another space event that was a tribute to system designers. The Rosetta mission to investigate comet Churyumov-Gerasimenko woke up after 31 months in hibernation mode, the latest stage in a journey that started almost ten years ago.
So it is possible to create systems that last for years – you just have to work hard at it.