posted by Bryon Moyer
My whimsical piece regarding an airplane touchscreen caught the eye of Touch International. They make touchscreens for airplanes and cars and other high-rel applications; they’ve been doing this for a long time. (I honestly don’t know if they made the screen I was whacking on.)
We met at the Interactive Technology Summit (erstwhile Touch Gesture Motion). It was interesting to contrast our discussion with some of the other things that I was hearing at the show. Touch Int’l makes all their own touchscreens, but they don’t lead the industry in R&D; to use CEO Michael Woolstrum’s phrase, they’re more about “applied science,” using established technologies in custom applications at moderate volumes.
And yet, while folks in the conference presentations talk about someday being able to do curved touchscreens, apparently Touch Int’l has been doing them since the 80s. To be clear, that’s “1D” curved, such as might come off of a roll. 2D curved, which you could fit over a spheroidal sort of shape, is coming, but isn’t here yet. For Touch Int’l or anyone else.
We also discussed the implications of touchscreens in some of the applications they address. Cars, for instance, presumably in an attempt to attract people with pseudo-whiz-bang cool-looking technology, have dropped all the easy-to-use knobs we (or our forebears) used to use intuitively. Instead, we’re faced with impenetrable GUIs that we must learn anew for each car, taking valuable time away from minor things like looking at the road.
I asked what the benefit of that really was (and, to be clear, this is pre-office-and-home-theater-in-the-car center stack), and apparently electronics are more reliable. I cocked my head a bit at that: phones used to be robust (you know, the old black Ma Bell ones that you could drop with impunity?) and they advertised that fact. Until they went more electronic. (I actually had a phone store salesman specifically say that the vaunted reliability no longer applied to new phones… this in the 80s.) And I owned a Mercedes at one point that seemed to need a lot of work. I talked to another Mercedes owner who crowed about the reliability. When I asked further, he clarified: the old ones were reliable; the newer ones with electronics were not. And I’ve never owned a car where the (now electronic) radio wasn’t the first thing to fail.
So hearing that electronic versions are more robust than the mechanical ones surprised me. I just assumed they were cheaper or looked cool or something… Mr. Woolstrum did agree that they can be confusing to use. In fact, he proposed a compromise that he thought optimal: putting mechanical controls over a touchscreen. That combines the ease of use and familiarity of knobs and such with a touchscreen underneath that actually does the work. Interesting idea.
So next time I’m banging away at a touchscreen in a car or in a plane, I’ll have a name and a face to associate with it. And they’ll probably wonder whether that’s a good thing…
posted by Bryon Moyer
I covered the recent TSensors Summit previously, having attended for one of the three days. That day happened to be dedicated to healthcare, and there were a few interesting points worth noting.
First, I have to say, I was surprised at the number of people who said, “We have the best healthcare system anywhere, and I wouldn’t change it a bit,” followed by a litany of problems with our healthcare system. I don’t know if it was some patriotic thing or an anti-ACA statement or what; it just struck me as incongruous to say that everything is great and then list the things that suck, including facts indicating better outcomes in other countries.
Dr. Mark Zdeblick, of Proteus Digital Health, made an interesting observation: most of today’s electronic healthcare gadgets are for healthy people. These are the things that tell you how many miles you ran, or how much you did of whatever else only healthy people can do. We haven’t actually gotten to the point of improving healthcare yet; we’re mainly maintaining it (for the techno-savvy that can afford it [my editorial, not his]).
The kinds of longer-term items we’re talking about here are patches and ingestibles and such. This is where the daily patch measuring calories in/out to help reduce obesity, if taken by a billion people, gets a third of the way to a trillion yearly sensors. Of course, if it succeeds and we have no more obese people, then that goes against the desire to ship lots of sensors. So we’ll either need a new application or we’ll need to get people to lose some weight, but not enough to be healthy and drop the patch.
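The “third of the way to a trillion” arithmetic above checks out with a quick back-of-the-envelope calculation (the billion-user figure is the article’s hypothetical, not a real deployment number):

```python
# One calorie-tracking patch per person per day, taken by a billion people.
users = 1_000_000_000
sensors_per_year = users * 365          # one disposable patch per day

# Compare against the TSensors target of a trillion sensors per year.
fraction_of_trillion = sensors_per_year / 1_000_000_000_000

print(sensors_per_year)        # 365 billion patches per year
print(fraction_of_trillion)    # ~0.365, i.e., about a third of a trillion
```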
Microfluidic labs-on-chips were also a topic, and in particular, it was noted that there are no good design tools for these. Chips and connectors and MEMS have design tools to help, although chips obviously have the most evolved tools due to their complexity and volume. MEMS and other mechanical devices (like connectors) have tools, but abstraction is further behind there (and may not be needed for the simple things like connectors). No such abstraction exists for microfluidics. Opportunity for an EDA company?
Finally, as noted in the other piece, silicon will not be the answer. Part of that is cost – a big part – but part of it relates to putting things on or in the body. Silicon can be used, but sending something with sharp corners and edges through an artery sounds less than savory, so when it is used, it has to be encapsulated in ways that are friendly to the body. Lots of work there for folks doing materials and packaging and connections – particularly wireless connectivity.
One quick afterthought: the only really uncomfortable moment in the day occurred when we had to look at that woman’s colon for far longer than seemed necessary. Um… yeah… Nuff said.
posted by Bryon Moyer
I saw an announcement about a new MEMS diagnostic instrument, the M150 from Ardic. It uses a laser to measure the frequency response of a MEMS element. Sounds simple enough. Or it did until I started thinking more about it. After all, lasers require line of sight. And most MEMS elements are far out of sight. And if you bring them into sight, then you may have changed the environment (air instead of vacuum or controlled gas, microphone packaging and cavities, etc.). So how does that affect the measurement?
I had an interesting chat with founder and CEO Edward Chyau to get some perspective on the tool. First, the main use cases: this is typically used either to verify a design early on, for occasional production monitoring, or for diagnosing failed units. So it’s not a high-volume in-line production system. It’s set up to handle dice, although they have had requests for a unit that could handle 6” wafers. (In theory, on a wafer, neighboring dice could interact since they’re mechanically connected, but he said that the actual element movements are so small that by the time their vibrations reached the neighboring die, they would have attenuated to a negligible level.)
And yes, the moving element must be visible. An InvenSense gyro, for example, would need to have its capping ASIC removed to take a measurement. So there will be some difference between the element’s behavior under the laser and its behavior in the wild. He says that the tests being run look for significant resonances within the region of interest, indicating that some mechanical element has a resonant mode, due to a design error or a production defect, that can interfere with normal operation. Those modes might shift frequency a little due to depackaging, but not a lot.
So, for example, if a microphone diaphragm exhibited a resonance at 1 MHz, then it wouldn’t be an issue. But if your window was up to 50 kHz and it had a 55 kHz mode, you’d want to pay attention even though it’s nominally outside the window.
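The judgment in that example can be sketched as a simple guard-band check: flag any mode that lands inside the band of interest or within some margin above it, since depackaging could shift it into the band. The 20% margin here is an illustrative assumption, not a figure from Ardic:

```python
def flag_modes(modes_hz, band_max_hz, margin=0.2):
    """Return the resonant modes worth a closer look.

    A mode matters if it falls inside the band of interest, or within
    a guard margin above it (depackaging could shift it into the band).
    The 20% default margin is a hypothetical illustration.
    """
    limit = band_max_hz * (1 + margin)
    return [f for f in modes_hz if f <= limit]

# A microphone with a 50 kHz band of interest:
# the 55 kHz mode is flagged; the 1 MHz mode is safely out of range.
print(flag_modes([55e3, 1e6], band_max_hz=50e3))  # [55000.0]
```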
It might be possible to anticipate any such shifts at design time. If design simulations were done assuming a vacuum, for example, then it would be possible to redo the simulations without a vacuum to establish a baseline expected behavior for laser probing. That could help set expectations for any change in behavior for correlation if that were a concern.
I also wondered if the laser itself would heat the element during the measurement, changing the performance as it heated. He said that the laser is low-power, like that on a CD-ROM drive, so no noticeable heating should occur during the 60-second scan.
That scan is part of what differentiates this unit from existing devices. Current technology uses laser Doppler interferometry, and it’s largely targeted at the academic environment with relatively high cost and a long learning curve. And a measurement can take 1-2 hours, with reasonably fussy setup.
By contrast, the Ardic system uses astigmatic detection, also used in their atomic-force microscope (AFM) tools. This gives them particularly good z-axis resolution, and they can scan from 1 Hz to 4.2 MHz in 60 seconds. Users often simply run that entire range even if they only need a subset just because it goes so fast.
I also asked whether this technique could apply down to the NEMS range, where quantum effects start to matter. And, in fact, here you’re dealing with features below the visible wavelength. So getting a laser focused that small and capturing the echo isn’t really feasible.
He said that you would likely need to take a more indirect approach. For example, you might have a MEMS-scale cantilever that acts as an atomic-force probe on the NEMS element. It doesn’t actually make contact (although I suppose that, on an atomic scale, where most of the atom is open space, “contact” is not such a clear concept); it just comes within Van der Waals force range. This allows the cantilever to track the motion of the NEMS element; the laser can then be used to measure the cantilever.
But, of course, the cantilever would have its own resonant modes, so you’d have to measure the cantilever first so that you could subtract its behavior out of the combined signal to isolate the NEMS element signal. This hasn’t been done yet, but he saw it as the kind of approach that would be required.
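A toy sketch of that subtraction idea, treating the combined response as the cantilever’s own response plus the NEMS element’s contribution (a simplifying assumption – in reality the coupled system would interact less cleanly, and all of the frequencies and peak shapes here are made up for illustration):

```python
import numpy as np

def lorentzian(f, f0, width, amp):
    """Idealized resonance peak centered at f0."""
    return amp / (1 + ((f - f0) / width) ** 2)

# Frequency axis: 0 to 1 MHz in 100 Hz steps.
f = np.linspace(0, 1e6, 10001)

# Hypothetical cantilever-only baseline, measured first.
cantilever = lorentzian(f, f0=300e3, width=5e3, amp=1.0)

# Combined measurement: cantilever tracking a (hypothetical) 120 kHz NEMS mode.
combined = cantilever + lorentzian(f, f0=120e3, width=2e3, amp=0.4)

# Subtract the baseline to isolate the NEMS element's signature.
nems = combined - cantilever
peak_hz = f[np.argmax(nems)]
print(peak_hz)  # 120000.0
```

The real workflow would of course use two measured spectra rather than synthetic ones, and a simple additive model is only a first approximation for a coupled mechanical system.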
You can read more in their announcement.