posted by Bryon Moyer
We’re used to touch being about locating one or more fingers or items on a surface. This is inherently a 2D process. Although much more richness is being explored for the long term, one third dimension that seems closer at hand is pressure: how hard are we pushing down, and can we use that to, for instance, grab an object for dragging?
At the 2011 Touch Gesture Motion conference, one company that got a fair bit of attention was Flatfrog, who uses a light-based approach, with LEDs and sensors around the screen to triangulate positions. At the 2012 Touch Gesture Motion conference, when 2D seemed so 2011, pressure was a more frequent topic of conversation. But clearly a visual technology like Flatfrog’s wouldn’t seem amenable to measuring pressure, since there’s no element that actually senses force.
If you have a squishy object like a finger, then you can use what I’ll call the squish factor to infer pressure. This is what Flatfrog does: when a finger (for example) touches down, they normalize the width of the item, and then they track how that width widens due to the squishing of the finger (or whatever). That means this only works with materials that squish. Metal? Not so much.
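As a sketch of the idea (my own illustration, not Flatfrog’s actual algorithm): normalize against the width at touch-down, and treat subsequent widening as a relative pressure signal.

```python
def pressure_proxy(widths_mm):
    """Turn a touch's width-over-time into a relative pressure signal.

    Illustration only -- not Flatfrog's actual algorithm. Normalize
    against the width at touch-down; the relative widening of the
    squishing fingertip then serves as a pressure proxy.
    """
    if not widths_mm:
        return []
    w0 = widths_mm[0]                         # width at touch-down
    return [w / w0 - 1.0 for w in widths_mm]  # 0.0 = no squish; grows with force

# A fingertip widening from 8 mm to 10 mm as the user presses harder:
proxies = pressure_proxy([8.0, 8.8, 10.0])
print([round(p, 2) for p in proxies])  # -> [0.0, 0.1, 0.25]
```

Note that the normalization step is what lets this work across different finger sizes: only the *relative* widening matters, not the absolute width.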
You might wonder how they can resolve such small changes using an array of LEDs that are millimeters apart. For a single LED and an array of sensors, for example, the resolution might indeed be insufficient. But because they have so many LEDs, the combined measurements from all of them let them resolve changes far smaller than any one LED/sensor pair could.
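A toy model of that averaging effect (my own illustration, with made-up numbers): if each LED/detector pair gives a noisy, independent width estimate, averaging N of them shrinks the error roughly as 1/√N.

```python
import random

def estimate_width(true_width_mm, n_measurements, noise_mm=0.5, seed=0):
    """Average many noisy, independent width measurements.

    Toy model only: each LED/detector pair yields the true width plus
    Gaussian noise; the mean of N such samples has its error reduced by
    roughly 1/sqrt(N), which is how many coarse measurements can combine
    into one fine one.
    """
    rng = random.Random(seed)
    samples = [true_width_mm + rng.gauss(0.0, noise_mm)
               for _ in range(n_measurements)]
    return sum(samples) / len(samples)

# One measurement is only good to ~0.5 mm; 400 together land much closer:
print(abs(estimate_width(8.0, 1) - 8.0))
print(abs(estimate_width(8.0, 400) - 8.0))
```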
There is a processing cost to this, of course: pressure tracking adds about 100 million instructions per second. “Ouch!” you say? Actually, it’s not that bad: their basic processing budget without pressure is about 2 billion instructions per second, so this is only about a 5% adder.
More information at their website…
posted by Bryon Moyer
One of the booths I stopped by at CES was Philips, who was demonstrating their uWand. Turns out, this isn’t that new a product, having been introduced in 2009-10 (clearly I wasn’t paying attention then). In their view, the market is only now catching up to this kind of technology, as is clear with the variety of Smart TV and gaming remotes being designed and marketed.
The uWand uses a different approach than some of the other devices, which tend to be either IMU-based or regular-camera-based. The uWand relies on an IR camera in the remote, which tracks a row of one or more IR LEDs at the bottom of the TV screen (more LEDs providing better range and angle). In the discussion, the comparison was often drawn to gyroscope-based solutions, because gyroscopes are known to drift.
So I asked about compensated systems, where a magnetometer is used to correct for gyro drift. And another gentleman came by and flatly said that it doesn’t work. I tried to push and pull a bit; yes, magnetic anomalies complicate matters, but in a living room, you likely have a fixed set of magnetic artifacts, for the most part, so you’d think that they would be seen as a “common mode” artifact and be subject to removal. And sensor fusion is getting pretty good these days. And I’ve seen demonstrations of IMU-based remotes that seem to have good response.
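For concreteness, here’s a minimal sketch of the kind of fusion I mean (my own illustration; real remotes would use something more elaborate, like a Kalman filter, and would need to reject transient magnetic anomalies): integrate the gyro for fast response, but keep pulling the estimate toward the magnetometer’s absolute heading so the gyro’s bias can’t accumulate.

```python
def fuse_heading(gyro_rates_dps, mag_headings_deg, dt=0.01, alpha=0.98):
    """Complementary filter for yaw/heading, in degrees. Sketch only.

    The gyro term (weight alpha) gives fast response but drifts; the
    magnetometer term (weight 1 - alpha) is slow and noisy but absolute,
    and it bounds the accumulated gyro bias.
    """
    heading = mag_headings_deg[0]            # initialize from the magnetometer
    for rate, mag in zip(gyro_rates_dps, mag_headings_deg):
        gyro_est = heading + rate * dt       # integrate the (biased) gyro
        heading = alpha * gyro_est + (1 - alpha) * mag
    return heading

# A gyro with a constant 0.5 deg/s bias, remote actually held still at 90 deg.
# Pure integration would drift 50 degrees over 100 s; the fused estimate
# settles within a fraction of a degree of the magnetometer's 90.
print(fuse_heading([0.5] * 10000, [90.0] * 10000))
```

In this toy case the steady-state error works out to alpha·bias·dt/(1 − alpha) ≈ 0.25°, bounded no matter how long you integrate — which is the “common mode” removal argument in miniature.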
Then again, I’ve never used one for a long period of time, so perhaps after an hour or two (more? less?) they need refreshing to work again. And I have seen some that need the figure-8 calibration. But, given the absolute nature of the “It doesn’t work” declaration, I feel the need to toss the question out for discussion.
To be clear, the question is not, “Which is better, uWand or IMU-based?” The question is, “For the purposes of TV remotes, can an IMU-based system using suitable sensor fusion be made to work to the level that would satisfy a consumer?”
What say you?
posted by Bryon Moyer
MEMS technology is providing new ways to generate reliable frequencies that conventionally require bulky LC tanks and crystals. Granted, it’s early days (as other monolithic ideas are commercialized), but research proceeds apace, with bulk acoustic wave (BAW) technology now joining actual mechanical moving parts as a candidate for commercialization.
The challenge with an approach requiring a moving part can be summed up in one word: release. While release is required for most MEMS, it’s always extra work to do, and avoiding it is tempting. The alternative to a moving mass is the use of waves transported in a solid, which is the BAW approach. The simplest such device involves two reflectors, top and bottom, but that requires both back- and front-side etching.
So-called Bragg reflectors* eliminate the need for “free surface” reflectors by using a sequence of two materials with different acoustic velocities. The layers typically alternate at quarter-wavelength thicknesses, and, if you have enough of them, the stack acts like a reflector. This can be used at the bottom, for instance, to eliminate the need for all the backside work to get a “real” reflector in there. It’s built up as a stack of alternating thin films.
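The quarter-wavelength rule is easy to put numbers to (my own illustration; the materials and velocities are rough textbook values, not from the paper): each layer is a quarter of the acoustic wavelength in that material at the target frequency, t = v/(4f).

```python
def quarter_wave_stack(freq_hz, velocities_m_s):
    """Layer thicknesses for an acoustic Bragg reflector: t = v / (4 * f).

    Alternating two materials with different acoustic properties at these
    quarter-wavelength thicknesses makes the stack strongly reflective
    at freq_hz. Velocities here are rough values, not the paper's.
    """
    return [v / (4.0 * freq_hz) for v in velocities_m_s]

# A 3.3-GHz reflector alternating SiO2 (~5900 m/s) and tungsten (~5200 m/s);
# SiO2 comes out around 447 nm per layer, W around 394 nm.
for name, t in zip(("SiO2", "W"), quarter_wave_stack(3.3e9, [5900.0, 5200.0])):
    print(f"{name}: {t * 1e9:.0f} nm per layer")
```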
In that configuration, the waves travel vertically; there have also been attempts to do this laterally, some of which have challenges and some of which still require release. But a paper at IEDM takes a slightly different approach, using deep-trench capacitors to create the Bragg reflectors and the drive and sense elements.
The good news is that the spacing of the trenches can establish the frequency – that is, lithography provides flexible target frequency design (as opposed to having to rely on a deposited film thickness or etch depth). However, quality is somewhat traded off for manufacturability in that the spacing doesn’t necessarily follow the ideal quarter-wavelength target.
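As a back-of-the-envelope illustration of lithography setting the frequency (my own sketch — the half-wavelength relation and the silicon velocity are assumptions, not the paper’s model): a lateral standing wave whose half-wavelength equals the trench pitch resonates at f ≈ v/(2p).

```python
def pitch_to_freq(pitch_m, v_m_s=8433.0):
    """Lateral resonance frequency set by trench pitch: f ~ v / (2 * p).

    Assumes a simple half-wavelength standing wave; v defaults to a rough
    longitudinal acoustic velocity for silicon. Illustrative only.
    """
    return v_m_s / (2.0 * pitch_m)

# Different pitches drawn on the same mask give different frequencies --
# the lithographic flexibility the researchers are after:
for p in (1.0e-6, 1.28e-6, 2.0e-6):
    print(f"pitch {p * 1e6:.2f} um -> {pitch_to_freq(p) / 1e9:.2f} GHz")
```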
The other piece of good news is, of course, that the manufacturing steps are the same ones used to create shallow-trench isolation (STI) on ICs. (I know, there’s the obvious question: make up your mind, is it deep trench or shallow trench? I guess that, by capacitor standards, it’s a deep trench; by isolation standards, it’s a shallow trench.)
Despite this tradeoff, the researchers claimed that their 3.3-GHz resonator, built on an IBM 32-nm SOI technology, approached the performance of similar suspended-mass resonators. If you have the IEDM proceedings, you can find the details in paper # 15.1.
*If you’re unfamiliar with Acoustic Bragg Reflectors, as I was, and want to Google it, be aware… most useful information appears to be locked behind the infamous pay walls. There were some bits and pieces I could salvage, but apparently such knowledge isn’t for us, the hoi polloi…