Nov 12, 2013

Gesture Progress

posted by Bryon Moyer

At the recent Interactive Technology Summit (erstwhile Touch Gesture Motion), gesture was featured more heavily on the day I was off checking out the TSensors summit. But I did get a chance to talk to both PointGrab and eyeSight to see what has transpired over the last year.

These two companies aim at similar spaces, gunning for supremacy in laptops, phones, and other household electronics (HVAC, white goods, etc.). Part of the game right now is design wins, and frankly, their design-win reports sound very similar. So there seems to be plenty of business to go around – to the point that, in some cases, a given customer appears to be using them both. I don’t know whether that’s to check them both out over time, to keep them both happy, or to use them as negotiation fodder against each other. To hear them tell it, business is good for everyone.

Development continues apace as well. One key change over the last year is a move away from using gestures simply to control a mouse. Under the mouse model, if you want to shut off your Windows laptop, you gesture the cursor down to the Start button and do the required clicks to shut down the machine*. The new model is simply to have a “shut down” gesture – the mouse is irrelevant.

PointGrab has already released this; eyeSight has it in the wings.
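To make the contrast between the two models concrete, here’s a minimal sketch – my own illustration, not either company’s actual interface; the gesture names and actions are invented. The old model turns hand motion into cursor movement; the new one maps a recognized gesture straight onto a command.

```python
# Illustrative only: gesture names and commands are hypothetical, not a real API.

def mouse_model(hand_delta, cursor_pos):
    """Old model: the hand merely moves the cursor; the OS still needs
    the right sequence of clicks before anything actually happens."""
    return (cursor_pos[0] + hand_delta[0], cursor_pos[1] + hand_delta[1])

GESTURE_COMMANDS = {
    "palm_push": "shut down",     # a hypothetical "shut down" gesture
    "swipe_left": "next track",
    "fist_hold": "mute",
}

def command_model(gesture_token):
    """New model: a recognized gesture maps straight onto a command."""
    return GESTURE_COMMANDS.get(gesture_token, "ignore")

print(mouse_model((5, -3), (100, 100)))   # (105, 97) -- still just a cursor
print(command_model("palm_push"))         # 'shut down' -- no cursor involved
```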

I discussed the issue of universal gestures with PointGrab. There is an ongoing challenge of developing gestures that are intuitive across cultures (there aren’t many – some say one, some say two…). PointGrab doesn’t actually see this as a big issue; there’s room for everyone to acquire a simple, well-thought-out gesture “lexicon” even if that means learning some gestures that weren’t already used in a given culture. Their bigger worry is that different companies will use different lexicons rather than everyone settling on one set of gestures.

PointGrab has also announced what they call Hybrid Action Recognition. This is a way of making gesture recognition smarter, and it consists of three elements (not to be confused with three sequential steps):

  • Watching for movement that suggests that a gesture is coming
  • Looking for specific shapes, like a finger in front of the face
  • Disambiguating look-alike objects

This feels to me a bit like yet another form of context awareness: these three tasks establish a context that says, “Hey, this is a gesture; that last thing wasn’t.” At present, this is a static system; in the future, they will be able to make it learn in real time.
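PointGrab hasn’t published how those three elements are implemented, so the following is only a rough sketch of how such checks might combine into a single “is this a gesture?” decision. All of the fields, labels, and thresholds are invented for illustration.

```python
from dataclasses import dataclass

# Objects that commonly fool a finger-shaped detector (illustrative list only).
LOOKALIKES = {"pen", "ruler", "remote"}

@dataclass
class Observation:
    pre_motion: float    # 0..1: how much "wind-up" movement preceded this frame
    shape_label: str     # best-matching shape from the detector
    shape_score: float   # 0..1: confidence in that shape match
    object_label: str    # what a second classifier thinks the object really is

def is_gesture(obs: Observation) -> bool:
    # 1. Watch for movement that suggests a gesture is coming
    if obs.pre_motion < 0.5:
        return False
    # 2. Look for a specific shape, like a finger in front of the face
    if obs.shape_label != "finger" or obs.shape_score < 0.7:
        return False
    # 3. Disambiguate look-alike objects
    if obs.object_label in LOOKALIKES:
        return False
    return True

print(is_gesture(Observation(0.8, "finger", 0.9, "finger")))  # True
print(is_gesture(Observation(0.8, "finger", 0.9, "pen")))     # False: look-alike
```

Presumably the real system weighs these signals jointly rather than as hard gates, and the learn-in-real-time version would tune such thresholds on the fly.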

 

Meanwhile, eyeSight noted that, in the future, you may have several gesture-enabled devices in a given room – perhaps a laptop, a TV, and a thermostat. If you gesture, which one are you talking to? As humans, we indicate whom we’re addressing primarily by looking at them. So eyeSight is looking at providing the same capability: a device would react to a gesture only if you’re looking at it.
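Here’s a toy sketch of that gaze-gating idea – the device names and the gaze input are my assumptions, not eyeSight’s published interface: a gesture is dispatched only to the device the user is looking at.

```python
# Sketch only: in a real system the gaze target would come from a camera-based
# head/eye estimator, not a string handed in by the caller.

class Device:
    def __init__(self, name):
        self.name = name

    def handle(self, gesture):
        print(f"{self.name} handles '{gesture}'")

def dispatch(gesture, devices, gaze_target):
    """Only the device being looked at reacts; everything else ignores the gesture."""
    for device in devices:
        if device.name == gaze_target:
            device.handle(gesture)
            return device
    return None  # user wasn't looking at any enabled device

room = [Device("laptop"), Device("tv"), Device("thermostat")]
dispatch("swipe_up", room, gaze_target="thermostat")  # only the thermostat reacts
```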

They’re also looking farther down the road at more holistic approaches, including gaze, face recognition, and even speech. (As humans, we can talk to someone we’re not looking at, but we then use speech – a name, for instance – to signal whom we’re addressing.) But this is a ways out…

As an aside, it was noted in one presentation that gaze in particular is good for broad-level use but doesn’t work well for fine tracking, since our eyes actually flit around at high speed (saccadic movement) – activity that our brains smooth out so that we don’t notice it. A computer could easily enough tell that we’re looking at it, but it would have to do similar smoothing to identify, for example, which word we’re reading on the screen.
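A crude illustration of that smoothing idea – not how any particular eye tracker does it; real systems use proper fixation-detection algorithms – is simply to average the last few gaze samples so the saccadic jitter doesn’t land on a different word every frame.

```python
from collections import deque

class GazeSmoother:
    """Average the last `window` gaze samples to estimate a fixation point."""
    def __init__(self, window=10):
        self.samples = deque(maxlen=window)

    def update(self, x, y):
        self.samples.append((x, y))
        n = len(self.samples)
        return (sum(s[0] for s in self.samples) / n,
                sum(s[1] for s in self.samples) / n)

smoother = GazeSmoother()
for raw in [(100, 200), (104, 198), (97, 203), (101, 199)]:  # jittery raw samples
    fixation = smoother.update(*raw)
print(fixation)  # hovers near (100, 200) despite the flitting
```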

This whole gesture space seems to be moving extraordinarily quickly; there has been significant change in only one year. This is but one reason that it’s all done in software instead of hardware; updates can be anything but minor. The other reason, of course, is that this capability is going onto mainstream consumer devices. Requiring specific hardware would introduce a much higher barrier to inclusion.

This tension between hardware and software is actually going to be playing out in related spaces, but that’s a topic for another time.


*Unless, heaven help you, you’re on the original Windows 8, in which case you’ll gesture to move the mouse all over the place in a vain attempt to find where to shut things down; then you’ll give up and gesture to bring up your favorite browser to search for “How the #@$(&* do I shut down my @(#$&(# Windows 8 machine???” and find that you go into Settings (???????) and do a few more mouse clicks (really??) by gesture, and Bingo! In only 15 minutes, you’ve managed to shut it off, with only a 50-point rise in your blood pressure! I think that, with this whole Windows 8 fiasco, Microsoft is earning itself its own specific gesture. One that I won’t repeat here, this being a family newspaper and all.

Nov 11, 2013

Going Expensive to Reduce Interposer Cost

posted by Bryon Moyer

Imec has been working on 2.5D IC issues with a focus on optimizing costs and, in particular, test yields. Yield can take what might have been straightforward-looking cost numbers and make things not so clear.

In their work on interposers, Eric Beyne took a look at three different ways of implementing the interposer routing for a wide-I/O memory to find out which had the best cost outlook. These puppies have lots of connections – like, 1200 per chip. The test case was to connect two such interfaces, each with four banks of 128 I/Os; each channel had 6 rows of 50 microbumps. Microbump pitch along a row was 40 µm; along a column it was 50 µm. The two interfaces simply needed to talk to each other across the interposer.

The cheapest, most traditional approach is to use PCB (or PWB) technology. An aggressive version would have 20-µm pitch and 15-µm vias. This approach resulted in an 8-layer board; you can see the layout below – lots of routing all over the place. Wire lengths were, on average, 180% of the die spacing.

 

[Figure: laminate (PCB) interposer routing, 8 layers]

 

Next was a semi-additive copper process – more aggressive dimensions and more expensive. Line pitch was 10 µm; vias were 7 µm. The tighter routing allowed connectivity with only 4 layers, and the average wire length was 166% of the die spacing. You can see the slightly less colorful result below.

 

[Figure: semi-additive copper (RDL) interposer routing, 4 layers]

 

Finally, they took an expensive approach: damascene metal lines, moving from the PCB fab to the silicon fab. This got them down to 2-µm pitch with 1-µm vias, and that was enough to run the wires straight across on 2 layers with no extra routing. In other words, wire lengths were equal to the die spacing. You can see this in the following picture.

 

[Figure: damascene interposer routing, 2 layers]

 

So what happens to the overall cost? The last option is nice, but expensive to build. And here is where yield comes in. Because the “most expensive” option uses only two layers, it has the best yield. And that yield more than compensates for the expensive processing, making it the cheapest option overall.

 

They didn’t give out specific cost numbers (they typically reserve those for their participants), but the net result is that they believe the damascene approach to be the most effective.
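Just to illustrate the mechanism with invented numbers (they’re mine, not Imec’s): what drives the result is the cost per good interposer, i.e., process cost divided by yield.

```python
# Hypothetical numbers only -- Imec reserves the real ones for its partners.
# The point is the arithmetic: cost per *good* part = process cost / yield,
# so a pricier flow with fewer layers (and better yield) can win.

options = {
    #  name                       (relative cost, assumed yield)
    "laminate, 8 layers":         (1.0, 0.40),
    "semi-additive Cu, 4 layers": (1.4, 0.65),
    "damascene, 2 layers":        (2.0, 0.95),
}

for name, (cost, good_fraction) in options.items():
    print(f"{name:>27}: cost per good interposer = {cost / good_fraction:.2f}")

# With these made-up numbers, damascene (2.11) edges out semi-additive (2.15)
# and laminate (2.50) -- the shape of Imec's conclusion, not their actual data.
```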


Images courtesy Imec.

Nov 05, 2013

A Touch Technology Update

posted by Bryon Moyer

This year’s Interactive Technology Summit took the place of what has been the Touch Gesture Motion conference of the last two years. It was expanded to four days instead of two, and it was moved to downtown San Jose from last year’s Austin retreat (a posh golf resort a $30 cab ride from anything except golf). The content also expanded to take on displays in general, a topic to which the last day was dedicated.

So, with a broader brief, it braved the vagaries of being located where all the engineers are. In other words, where engineers can easily attend – but on their way to the conference, they can quickly stop off at the office to take care of one little thing, and then this call comes in, and then that email arrives, and… well… maybe they won’t make it after all. At any rate, attendance did seem a bit sparse. (And I’m speculating on the reasons.)

I had to pick and choose what I checked out, given that the TSensors conference was happening in parallel with it (I don’t seem to make a good parallel processor). My focus has generally been on the “smart” aspects of this technology, namely touch, gestures, and motion. There wasn’t much new in the motion category; we’ll look at touch today and then update gestures shortly.

The last two years seemed to be all about the zillion different ways of recording touches. This year there was less of that, but Ricoh’s John Barrus dug deeper into the whole pen/stylus situation. For so long, it seems, we’ve been wowed by touch technology that enables more or less one thing: point and click. (OK, and swipe and pinch.) We’ve been all thumbs as we mash our meaty digits into various types of touch-sensitive material (mostly ITO).

The problem is, our fingers are too big to do anything delicate (well, speaking for myself anyway, as exemplified in my abortive attempt to play a word scramble game on the back of an airplane seat). And they cover up what we’re trying to touch. Which is largely why I’ve resisted the wholesale switch from things like keyboards and mice to everything-index-finger. (Yeah, some teenagers think they can two-finger type quickly… perhaps they can, for two fingers, but I’ll blow them away any day with a real keyboard, which is important when what you do is write for a living…)

So I was intrigued to see this presentation, which took a look at a wide variety of stylus and pen approaches, both for personal appliances as well as large-format items like whiteboards. Two clear applications are for note-taking and form-filling. I needed a new laptop recently, and I researched whether stylus/touchscreen technology was fast and fine enough for me to get rid of my notebooks and take notes onscreen. (I don’t like to type in meetings – it’s noisy, and I somehow feel it’s rude.) The conclusion I came to was that no, it’s not ready for this yet. So it remains an open opportunity.

He also noted that tablets with forms on them were easily filled out in medical offices by people who had no computer experience; just give them the tablet and a stylus, and it was a very comfortable experience. (Now that’s intuitive.) (And if you wonder why this works for forms but not for notes, well, you haven’t seen my handwriting…)

The technologies he listed as available are:

  • Projected capacitance (“pro-cap”) using electromagnetic resonance with a passive pen (i.e., no powered electronics in the pen)
    • The pen needs no battery
    • Good palm rejection – ignores the surface when the pen is in use
    • Good resolution
    • Pressure-sensitive
    • But higher cost
    • Samsung has tablets using this (Galaxy Note)
  • Pro-cap with an active pen
    • Can be done in a large format (Perceptive Pixel has done up to 82”)
    • Similar benefits to the prior one
    • But again, higher cost plus the pen needs a battery
  • “Force-sensitive resistance”
    • Grid of force-sensitive resistors
    • Pressure-sensitive
    • Multi-touch
    • Scales up well
    • But it’s not completely transparent.
    • Tactonic Technologies uses this.
  • There’s an interesting “time-of-flight” approach where the pen sends a simultaneous IR blip and ultrasonic chirp; the delays are used to triangulate the pen position (a rough sketch of the arithmetic follows this list)
    • The cost of this is lower
    • Multiple pens can be tracked independently (say, for whiteboards with more than one person)
    • But it’s not really a touch technology; it’s pen-only
    • The Luidia eBeam Edge and Mimio Teach use this
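Since the IR blip arrives essentially instantly and the ultrasonic chirp doesn’t, the delay between the two at each of two receivers gives a distance, and two distances pin down the pen. Here’s a rough sketch of that arithmetic – the receiver geometry and timings are invented for illustration, and this isn’t any vendor’s published algorithm.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def pen_position(dt1, dt2, receiver_spacing):
    """Locate the pen from the ultrasound-after-IR delays (seconds) at two
    receivers sitting at (0, 0) and (receiver_spacing, 0) on the board edge."""
    r1 = SPEED_OF_SOUND * dt1          # distance to receiver 1
    r2 = SPEED_OF_SOUND * dt2          # distance to receiver 2
    L = receiver_spacing
    x = (r1**2 - r2**2 + L**2) / (2 * L)    # intersect the two range circles
    y = math.sqrt(max(r1**2 - x**2, 0.0))   # writing surface is on one side
    return x, y

# Pen 0.5 m from one receiver and 0.7 m from the other, receivers 1 m apart:
print(pen_position(0.5 / 343.0, 0.7 / 343.0, 1.0))  # ~(0.38, 0.32)
```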

Then there are a bunch of light-based approaches, some of which we’ve seen before. But, unlike the screens that carry light through the glass, most of these project and detect light above the surface. One simple version uses light sources and detects the shadows cast by a finger or pen; Smart Technologies and Baanto use this approach.

Other large-format (i.e., whiteboard) installations rely on a camera and some kind of light source.

  • In some cases, the source is the pen itself, which emits an IR signal (Epson BrightLink).
  • In another, a projector casts a high-speed structured pattern onto the surface that’s sensed by the pen (TI’s DLP technology in Dell’s S320wi).
  • Another version sends an IR “light curtain” down over the surface; a camera measures reflections as fingers (or whatever) break that curtain (Smart Technologies LightRaise).
  • There’s even a whiteboard with a finely-printed pattern and a pen with a camera that detects the position and communicates it via Bluetooth (PolyVision Eno).

The general conclusion was that the various technology options work pretty well; it’s mostly cost that needs to be solved.

There are also some issues with collaboration for whiteboard and video conferencing applications, but we’ll cover those later.
