posted by Bryon Moyer
Inertial measurement units (IMUs), once cool and shiny in their new MEMS editions, are now familiar old friends. We’ve become accustomed to motion sensors in our phones, so this particular combination of linear-acceleration and rotation-rate sensing feels rather established in comparison with some of the new sensors being considered for consumer use.
But there’s still stuff going on in the motion world, and a few of the recent announcements seemed worthy of note. Oddly enough, they all came out within a week of each other; we’ll simply take them in chronological order.
First, STMicroelectronics announced what they say is the world’s smallest six-axis motion sensor, the ASM330LXH. The package is a 3 × 3 × 1.1 mm land-grid array; the small size comes largely from integrating all six sensing axes on a single chip. Ranges are ±2/4/8/16 g for the accelerometer and ±125/245/500/1000/2000 dps for the gyroscope.
While targeted for consumer applications in general, there are lots of hints in their release that they’d love to see some of these in cars. They’ve been qualified for “non-critical” automotive apps, which, surprisingly, includes navigation. I suspect that will change when driverless cars arrive…
Bridging bulk and surface
A few days later, ST made another announcement – this one not so much about a specific sensor as about a fundamental process they’re using for their motion sensors. In fact, this process underlies the six-axis device we just discussed.
The deal here is about what they call their THELMA process, which distinguishes itself by having a thicker epi layer than normal – 60 µm. Perhaps a little background here will help provide some context.
Motion sensors have “proof masses” – chunks of mass that react to changes in motion. The sensors detect the effects of motion on the proof mass (exactly how varies by sensor) to provide the readings. In general, the heavier the proof mass, the better the signal.
Bulk-machined proof masses are carved out of the bulk silicon wafer. They’re beefy and perform well, but they’re also more expensive to make. Surface-machined proof masses, by contrast, aren’t built from the wafer itself; they’re made by growing a layer of epitaxial silicon on the wafer. This is much easier to fashion into a proof mass, so it’s how inexpensive consumer-grade sensors are typically made. You lose something in performance; it’s a “you get what you pay for” thing.
According to ST, the typical epitaxial layer for surface machining is 25 µm. By making that layer thicker, they’ve added more bulk to the proof mass; the idea, then, is that their motion sensors should perform better than your typical surface-machined device while still keeping much of the cost advantage.
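A quick back-of-envelope sketch shows what the thicker epi buys. The assumptions here are mine, not ST’s: that proof mass scales linearly with the structural layer thickness for a given layout, and that the thermal-mechanical (Brownian) noise floor of a MEMS accelerometer scales as 1/√m at a fixed resonant frequency and Q.

```python
import math

# Back-of-envelope: what does a thicker epi layer buy?
# Assumption: for a given sensor layout, proof mass scales linearly
# with the structural (epi) layer thickness.
t_typical_um = 25.0   # typical surface-machining epi thickness (per ST)
t_thelma_um = 60.0    # ST's THELMA epi thickness

mass_ratio = t_thelma_um / t_typical_um
print(f"Proof mass increase: {mass_ratio:.1f}x")  # 2.4x

# Assumption: the Brownian noise floor scales as 1/sqrt(m) for a
# fixed resonant frequency and quality factor.
noise_improvement = math.sqrt(mass_ratio)
print(f"Brownian noise floor improves by ~{noise_improvement:.2f}x")
```

So, under those assumptions, a 2.4× heavier proof mass is worth roughly a 1.5× lower noise floor – a meaningful step toward bulk-machined performance at surface-machined cost.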
Compass and Gyro
The next day, mCube announced the smallest eCompass and “iGyro.” Not coincidentally, I suspect, these are both accelerometer/magnetometer combo sensors. In both cases, the magnetometer takes the lead role, with the accelerometer providing corrections.
In the case of the eCompass, the accelerometer provides “tilt compensation” since most of us can’t hold a compass exactly flat. The accelerometer can detect the acceleration of gravity and therefore knows which direction “down” is, and sensor fusion software can then provide a corrected compass reading.
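The tilt-compensation math is standard fare: use the gravity vector to estimate roll and pitch, then rotate the magnetometer reading back into the horizontal plane before taking the heading. A minimal sketch (the axis convention is an assumption – real parts differ, and mCube’s actual fusion code is surely more elaborate):

```python
import math

def tilt_compensated_heading(ax, ay, az, mx, my, mz):
    """Compass heading in degrees, corrected for tilt.

    Axis convention (an assumption): x forward, y right, z down;
    the accelerometer reads +1 g on z when the device is flat.
    """
    # Roll and pitch from the gravity vector
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, ay * math.sin(roll) + az * math.cos(roll))
    # De-rotate the magnetic field vector into the horizontal plane
    bx = (mx * math.cos(pitch)
          + my * math.sin(pitch) * math.sin(roll)
          + mz * math.sin(pitch) * math.cos(roll))
    by = my * math.cos(roll) - mz * math.sin(roll)
    return math.degrees(math.atan2(-by, bx)) % 360.0

# Held flat, field pointing along +x (north): heading comes out 0.
print(tilt_compensated_heading(0, 0, 1, 1, 0, 0.5))
```

The point of the exercise: without the roll/pitch de-rotation, the same magnetometer readings give a heading that swings wildly as you tip the phone.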
The iGyro is a “soft” or “emulated” gyroscope. Here the magnetometer provides the rotational information, but the accelerometer is used to help reject “magnetic anomalies” – big metallic items that can distort magnetometer readings.
So, in reality, the difference between the two devices is the sensor fusion software used to turn the raw sensor signals into either compass direction or angular rate outputs.
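To make the “soft gyro” idea concrete, here’s a crude sketch of how one could derive a yaw rate from successive magnetometer headings while rejecting anomalies. This is my illustration, not mCube’s algorithm; the field-magnitude check and the threshold values are assumptions, and a real fusion filter would blend in the accelerometer rather than simply dropping samples.

```python
EARTH_FIELD_UT = 50.0   # nominal Earth-field magnitude in uT (assumption)
TOLERANCE_UT = 15.0     # reject samples far from nominal (assumption)

def yaw_rate_dps(heading_prev, heading_now, dt, field_ut):
    """Crude 'soft gyro': yaw rate from successive compass headings.

    Returns None when the field magnitude suggests a magnetic anomaly
    (a big metal object distorting the magnetometer); a real fusion
    filter would lean on the accelerometer here instead of giving up.
    """
    if abs(field_ut - EARTH_FIELD_UT) > TOLERANCE_UT:
        return None  # anomaly: don't trust this sample
    # Wrap the heading change into (-180, 180] before differentiating
    delta = (heading_now - heading_prev + 180.0) % 360.0 - 180.0
    return delta / dt

print(yaw_rate_dps(350.0, 10.0, 0.5, 48.0))   # 40.0 deg/s across the wrap
print(yaw_rate_dps(90.0, 95.0, 0.1, 120.0))   # None: anomaly rejected
```

Note the wrap-around handling: a swing from 350° to 10° is a 20° turn, not a −340° one – exactly the kind of detail the fusion software has to get right.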
These aren’t new to mCube, but they announced their smallest versions, both in 2 × 2 × 0.95 mm packages.
Finally, Silicon Designs announced a new line of accelerometers suitable for vibration sensing, the SDI Model 1510 Series. These appear to be rather different from the highly-integrated-and-digitized sensors going into consumer devices: they provide analog outputs. Then again, vibration sensing isn’t something people have been asking for in their phones – unless, perhaps, to help distinguish a real silent incoming call from that phantom-vibration feeling.
It can be used to measure either DC or AC acceleration, with a single-ended or differential output. Different family members provide an acceleration range from 5 to 100 g.
That’s all the IMU news for now. Well, actually not: there’s one other spin on motion sensing that we’ll look at next week. It’s worth a separate discussion.
(Images courtesy mCube, Silicon Designs)
posted by Bryon Moyer
It’s high noon at IEDM. Both Intel and IBM have “late-breaking news” with their 14-nm FinFET numbers. The giant room is filled to bursting capacity. I’m lucky enough to have some space along the side wall, far from the screen. So far, in fact, that much of what’s on the screen is completely illegible.
Oh, and did I mention photography is not allowed? So… you can’t see the information, you can’t record it even if you saw it… you could busily write what little you can see but then you’re not listening… Oh well, the paper is in the proceedings and I should be able to get the slides after the fact. Right?
Nope. IBM politely declined. Intel didn’t respond at all. (Good thing the proceedings have contact information…) So if the paper is the only record of what happened, then why bother with the presentation? Except for those in the center of the room…
Yeah, I was frustrated, since, in these presentations, you can get a better sense of context and perspective, but only if you have a photographic memory. And I don’t. (And getting less so with each day.) There were definitely points that were made in the presentations that are not in the paper… so I can’t report them.
The whole deal here is Intel’s 14-nm bulk-silicon process vs. IBM’s 14-nm SOI process. And here’s the major takeaway: cost and performance have improved. Moore’s Law, reported as dead at the leading nodes, has taken a few more breaths. It’s just like the good old days, where area shrunk enough to make up for increased costs, and performance gained substantially.
I was going to compare some numbers here, but it’s too spotty to find numbers that they both reported in their papers. For instance, IBM reports a 35% performance improvement over 22 nm; as far as I can tell, Intel reported a performance improvement in the presentation, but didn’t put it in the paper. (I assume that’s intentional.)
Some notable process points:
IBM:
- A dual-work-function process that allows optimizing both low- and high-VT devices without resorting to doping. No details provided on that process.
- 15 layers of copper
- Deep-trench embedded DRAM.
Intel:
- Sub-fin doping.
- The fin is now much more rectangular than in their last edition.
- 13 interconnect layers
- Air-gapped interconnects: pockets of air between lines on select metal layers that reduce capacitance by 17%. They were not willing to discuss how they do the air-gapping, just that they do.
- Random VT variation, which grew from node to node for many nodes, is almost back down to where it was at the 90-nm node.
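That 17% capacitance reduction from air gaps is easy to sanity-check with a toy model. The model here is my own crude approximation, not anything from the paper: treat line-to-line capacitance as proportional to an effective dielectric constant, mixed linearly between the interlayer dielectric and air by volume fraction, with an assumed low-k value of 3.0.

```python
# Sanity check on the 17% capacitance reduction from air gaps.
# Crude model (an assumption): line-to-line capacitance scales with
# an effective dielectric constant, mixed linearly between the ILD
# and air by volume fraction.
K_ILD = 3.0   # assumed low-k interlayer dielectric constant
K_AIR = 1.0

def cap_reduction(air_fraction, k_ild=K_ILD):
    """Fractional capacitance reduction for a given air-gap fill."""
    k_eff = air_fraction * K_AIR + (1.0 - air_fraction) * k_ild
    return 1.0 - k_eff / k_ild

# How much air does the simple model need to reach ~17%?
for f in (0.1, 0.2, 0.25, 0.3):
    print(f"{f:.0%} air -> {cap_reduction(f):.1%} less capacitance")
```

In this model, replacing roughly a quarter of the dielectric volume with air gets you to about 17% – a plausible fill fraction for gaps on a couple of metal layers, which is at least consistent with the reported number.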
Selected images and data follow…
[Suggestion to IEDM: require that presentations be made available. They shouldn’t be presenting material if they don’t have the cojones to stand behind it after the presentation…]
All images courtesy IEDM.
posted by Bryon Moyer
The Touch Gesture Motion conference (TGM) covers various technologies related to up-and-coming human-machine interface approaches. And its middle name is “Gesture.” How we doin’ there?
Well, first off, some of the consistent names in gesture – regular faces in past years – were not present this year. That caught my eye. And then there was an interesting presentation providing evidence that consumers aren’t delighted with gesture technology. Another red flag.
So let’s look at some evidence and then go over some of the challenges that gesture technology may need to overcome.
I personally have only one piece of evidence – which, scientifically speaking, isn’t evidence but an anecdote. I wrote about it before: the gesture for answering a phone call overlapped with a hang-up gesture. Yeah, you can see where that went.
But there’s another source: a company called Argus Insights monitors… um… well, online social discussion. And they intuit from that how people are feeling. Note that this doesn’t really provide information on why folks are reacting the way they are; it simply provides the reaction.
They get this by mining the social media buzz surrounding various products. They check not only the amount of discussion, but they also characterize whether it’s positive or negative. For instance, they found that the Samsung Galaxy S3 started with a 0.75 “delight” rating, but the S4 had a rather rocky debut, starting as low as 0.25 and eventually crawling up to about 0.70 or so. Later, the S5 nailed it at around 0.85 or so out of the chute, declining to around 0.8.
Depending on how they mine this stuff, they extract information on different aspects of technology. I’m not privy to the details of how they do the extraction (if they were my algorithms, I certainly wouldn’t make them public), so I can’t swear as to the accuracy, but folks are listening.
And here’s what Argus says about gestures: consumers are not thrilled. The following chart shows consumer reaction to touchscreens, touchscreen responsiveness specifically, and gesture recognition – and the latter shows a pretty dramatic dropoff.
Graph courtesy Argus Insights
While this data doesn’t provide a cause, other presentations and discussions from the conference can shed some light. In fact, it’s not hard to see why gestures might be a problem.
John Feland, cofounder and CEO of Argus Insights, related one incident where he was consulting with a system house, and they declared, “We should assemble a vocabulary of 35 gestures!” as a response to other systems having growing gesture vocabularies. As if the number of gestures defined success. As you might imagine, Mr. Feland advised against that.
Why? Because who wants to memorize 35 gestures? OK, perhaps it’s possible – if we, as a culture, standardize on gestures and start teaching kids at an early age, the way we do keyboarding today. It becomes ingrained and we carry it with us the rest of our lives.
But that’s not what’s happening. Each system maker has its own vocabulary. Those vocabularies are enabled, typically, by separate companies specializing in gesture technology. Those providers each have different vocabularies. And those vocabularies sometimes relate to the technology used to see the gestures. Is it ultrasound? A camera? What camera technology?
So it’s not simply a matter of learning 35 gestures. In fact, let’s drop the issue of too many gestures; let’s assume there are, oh, eight. That’s not many – especially with symmetries (up/down/left/right are probably – hopefully – analogous). But if you have two tablets in the house and three phones and an entertainment system, each of which has eight gestures, and they’re all a different set of eight gestures, then you have to remember for each system which gestures do what. Kids, with their annoying plastic minds, can probably do that. Adults? Not so much. (OK, we could. But we’re old enough to have other things to do with our time and gray matter.)
Of course, the solution is to standardize on eight gestures to be implemented throughout the industry. Yeah, you can imagine how fun that discussion would be. In addition to picking the eight, you’d also want to be culturally sensitive, meaning a different eight for different cultures, meaning also defining which cultures get their own and where the boundaries will be. Great rollicking fun for the entire family to watch if UFC isn’t on at the moment.
And it’s not just the gestures themselves. There are also… what to call them… framing issues. How do you end one gesture and start another? One system might do gestures all in a single plane; in that case, pulling your hand back towards you could be interpreted as ending a gesture. But another system might use a pulling-towards-you gesture for zooming, with some other way of indicating that the gesture is complete.
My own observation is that gesture technology has largely been viewed as a cool thing to bolt onto systems. And let’s be clear on this: it is cool. At least I think it is. That simple cameras or other devices can watch our hands and sort out what we’re doing in complicated scenes and settings is really amazing.
But it also feels like we’ve added them to systems in an, “Isn’t this cool??” manner instead of an, “Isn’t this useful??” way. And consumers like cool for only so long, after which they get bored – unless it’s also useful. Which would be consistent with higher satisfaction early and then a drop off.
Probably the biggest question ends up being, is it useful enough to generate revenues that will fund the further development and refinement of the technology? That value question has also not been unambiguously decided one way or the other.
So there are lots of data points here; they all suggest that there’s more to be done. I’ll leave it to the participants in this battle to decide the best fixes… or you can add your own thoughts below.