Mar 25, 2014

Wide-Ranging Approaches to Ranging

posted by Bryon Moyer

As I’ve mentioned before, there are constants at ISSCC (e.g., sessions on image processing and sensors) and then there are the circuits-of-the-month. Ranging seemed to be one of the latter, showing up in both image-processing and sensor sessions. So I thought I’d summarize some of the widely differing approaches to solving issues related to ranging for a variety of applications.

For those of you following along in the proceedings, these come from sessions 7 and 12.

Session 7.4 (Shizuoka University, Brookman Technology) offered a background-cancelling pixel that can determine the distance of an object using time-of-flight (ToF). As you may recall, ToF is more or less light radar (LIDAR, essentially), where the arrival time of the reflection of a known emitted light pulse gives you the distance.
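
The arithmetic behind that is trivial – the light makes a round trip, so the distance is half the round-trip time times the speed of light. A quick illustrative snippet (my own, not anything from the paper):

    C = 3.0e8  # speed of light, m/s

    def tof_distance(round_trip_s):
        """Distance to a target from the round-trip time of a light pulse."""
        return C * round_trip_s / 2.0

    print(tof_distance(10e-9))  # a 10 ns round trip -> 1.5 m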

There are four lateral gates in this pixel, directing charge from impinging light into one of three floating diffusion areas (the fourth gate simply discharges the pixel).

Background cancellation has historically been done by comparing adjacent frames, but quick motion can create strange artifacts. So at the beginning of the capture cycle for this work, the background is measured and stored in the first diffusion for later subtraction. Then the emitter turns on and collection moves to the second diffusion. The reflection may start returning during that window; when the emitter shuts off, collection switches to the third diffusion. Comparing the charge in those two diffusions (after subtracting the stored background) gives the distance.
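
For the curious, here’s a rough sketch of how that kind of three-tap measurement typically turns charges into distance. To be clear, this is the generic textbook arithmetic, assuming equal-length collection windows – the paper’s exact formulation may differ:

    def itof_distance(q_bg, q_during, q_after, pulse_s, c=3.0e8):
        """Indirect ToF from a three-tap, background-cancelling pixel.

        q_bg:     charge collected before the emitter fires (ambient only)
        q_during: charge collected while the emitter is on
        q_after:  charge collected after the emitter shuts off
        pulse_s:  emitted pulse width, in seconds

        Assumes all three windows have the same length, so the ambient
        charge can be subtracted tap-for-tap. The fraction of reflected
        charge landing in the late tap encodes the echo delay.
        """
        s_during = q_during - q_bg
        s_after = q_after - q_bg
        delay = pulse_s * s_after / (s_during + s_after)
        return c * delay / 2.0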

Session 7.5 (Shizuoka University) addresses the challenge of doing high-precision ranging for the purposes of, say, modeling an object. The problem is that, to get higher resolution, you ordinarily need to separate the light source from the imager by a wide angle. That’s hard to do in a small device. Such devices typically have resolution in the few-cm range, which isn’t much use for object modeling; this work achieved 0.3-mm resolution.

There were three keys:

  • They used an extremely short (< 1 ns) light pulse.
  • They used a drain-only modulator (DOM) – by eliminating the lateral pass gate, they get a faster response. The pixel itself can only accumulate or drain.
  • They captured all of the pixels at once, but the tight timing brings another issue: skew between pixels is no longer noise, but can screw up the measurement. So they implemented a column deskew circuit and procedure. (A quick calculation after this list shows why.)
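
To see why they went to all that trouble, here’s a back-of-the-envelope calculation (mine, not theirs) of the timing precision that 0.3-mm resolution implies:

    C = 3.0e8  # speed of light, m/s

    resolution_m = 0.3e-3        # the paper's claimed resolution
    dt = 2 * resolution_m / C    # light covers the distance twice (out and back)
    print(dt)                    # 2e-12 s, i.e., 2 ps

    # At 2 ps, pixel-to-pixel skew isn't noise; it's a systematic
    # error, which is why the column deskew is needed.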

Microsoft weighed in in Session 7.6 (they couldn’t help putting a flashy brand on their opening slide – something you generally don’t see at ISSCC, but I guess the marketing guys need something to prove their value, even if it means being tasteless). This was an improved Kinect ranging system where the challenge is accommodating both distant low-reflectivity (i.e., low-light) and close-in high-reflectivity (i.e., high-light) objects. Pretty much your classic dynamic-range issue, complicated by the distance thing.

They decoupled the collection of charge from the “A or B” assignment that will be used to calculate the distance. The A and B rows act as inputs to a differential cell. A high-frequency clock alternates A and B activation during collection, so the assignment to A or B, determined by the clock, happens simultaneously with charge collection. The transfer to a floating diffusion can then happen afterwards, at a leisurely pace (to use their word).
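
For context, here’s how a generic continuous-wave ToF system turns differential A/B samples into distance, using the classic four-phase recipe. This is illustrative only – I’m not claiming it’s Microsoft’s exact math:

    import math

    def cw_tof_distance(d0, d90, d180, d270, f_mod, c=3.0e8):
        """Distance from four differential (A - B) samples taken with
        the demodulation clock shifted 0/90/180/270 degrees relative
        to the emitter -- the standard four-phase CW ToF recipe.
        """
        phase = math.atan2(d90 - d270, d0 - d180)  # echo phase shift
        if phase < 0:
            phase += 2 * math.pi
        # One full cycle of phase corresponds to c / (2 * f_mod) of range.
        return c * phase / (4 * math.pi * f_mod)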

They also implemented a common-mode reset to neutralize a bright ambient. And each pixel can set its gain and shutter time; this is how they accommodate the wide dynamic range.

Meanwhile, over in Session 12, folks are using other sensors for ranging. In Session 12.1 (UC Berkeley, UC Davis, Chirp Microsystems), they built a pMUT (piezoelectric micro-machined ultrasonic transducer) array to enable gesture recognition. Think of it as phased-array radar on a minuscule scale. They process the received signals by phase-shifting – basically, beamforming – in an attached FPGA.
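
Delay-and-sum beamforming itself is conceptually simple; here’s a minimal sketch of the idea (the FPGA implementation obviously looks nothing like this, and the linear element geometry is my own assumption):

    import numpy as np

    def delay_and_sum(rx, element_x, angle_rad, fs, c=343.0):
        """Steer an ultrasonic array toward a given angle.

        rx:        (n_elements, n_samples) received waveforms
        element_x: element positions along a linear array, in meters
        angle_rad: steering angle from broadside
        fs:        sample rate, Hz
        c:         speed of sound in air, m/s
        """
        out = np.zeros(rx.shape[1])
        for sig, x in zip(rx, element_x):
            # Integer-sample alignment; np.roll wraps around, which is
            # fine for a sketch but not for production.
            delay = int(round(x * np.sin(angle_rad) / c * fs))
            out += np.roll(sig, -delay)
        return out / len(rx)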

Within the array, some pMUTs (think of them as ultrasonic pixels, sort of) are actuated to send a signal, others listen for the reflection, and some do both. Which elements do which can be chosen to optimize for a given application.

They also want to sample at 16x the resonant frequency of the sensors to lower in-band quantization noise and simplify the cap sizing. (No relation to an unfortunate boating incident.) But that means they need to know the actual, not approximate, resonant frequency for a given device – natural variation has to be accommodated, as does response to changing environmental conditions like temperature.

To do this, they have a calibration step where they actuate the sensors and measure their ring-down, using the detected frequency to set the drive frequency of the actuator. This calibration isn’t done with each capture; it can be done once per second or minute, as conditions for a given application warrant.
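
One straightforward way to pull a resonant frequency out of a ring-down record is to count zero crossings. A sketch of the general idea – the actual circuit presumably does something equivalent in hardware:

    import numpy as np

    def ringdown_frequency(x, fs):
        """Estimate resonant frequency from a decaying ring-down.

        x:  sampled ring-down waveform (assumed DC-free)
        fs: sample rate, Hz
        Counts rising zero crossings and divides elapsed cycles
        by elapsed time.
        """
        rising = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
        if len(rising) < 2:
            raise ValueError("not enough cycles to estimate frequency")
        cycles = len(rising) - 1
        seconds = (rising[-1] - rising[0]) / fs
        return cycles / seconds

    # The result would then set the drive clock -- e.g., sample at
    # 16x this frequency, per the paper.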

As always, the details on these sessions are in the proceedings.

Mar 24, 2014

IoT Update: I Give Up

posted by Bryon Moyer

Last year I proposed an overall architecture for the Internet of Things (IoT). The goal was to clarify the many different pieces required to make this work. And, in particular, to clarify which companies do which parts of the IoT.

There are so many companies that say they “enable the IoT.” But what does that mean? Last year, it could have meant many things, so I tried to make some sense out of it. My intent was to come back later and revise and refresh that effort.

That’s what I started to do recently – until I threw up my hands in dismay. There are so many companies claiming to participate in this business, and there’s typically not enough information available to place them properly in the various categories I set up. I have updated the table below, but only to the point where I surrendered.

You could argue that, as a journalist, I should be digging into each and every one of these companies to ferret out the truth. Up to a point, I agree; that’s what I did before. But after a while, I realized that I was turning into an industry analyst.

In reality, truly fleshing things out would be something of a full-time job; it would keep me from doing anything else for a while.

Meanwhile, the number and range of companies tying their pitches to the IoT has ballooned. I could probably tie sneakers to it… let’s see… the first commercial application of a special new rubber in the soles, the volume sales of which will provide the revenues necessary to research new elastomers in home widgets that can be connected to the IoT! Boom! “New Footwear Supports the IoT”

<sigh>

So I’m going to keep watching for and covering interesting IoT technology and companies doing new, unique things that can clearly demonstrate a substantial IoT connection. (Like today’s M2M discussion of DDS.) But for the moment, characterizing all the companies claiming an IoT connection feels a tad too quixotic. I hate embarking on something and then backing off… but… there you have it.

[Figure: the updated table of IoT companies by category]

Mar 20, 2014

More Common-Process MEMS

posted by Bryon Moyer

Last year we took a look at a couple of proposals for universal processes from Teledyne/DALSA and CEA-Leti that could be used to make many different MEMS elements, trying to move past the “one product, one process” trap. We’ve also reported on the AMFitzgerald/Silex modular approach and their first device.

Well, the first design using CEA-Leti’s M&NEMS process has rolled out: a single MEMS chip with three accelerometers and three gyroscopes designed and implemented by Tronics. They’re not quite the smallest of the 6-DOF sensors, but they claim that, with more optimization, they will be. Right now their die size is 4 mm². And they say that all main parameters are on track with their simulation models.

But this is just the first functional version; they’re going back to refine it further while, at the same time, developing a companion ASIC, with both due for release at the end of this year.

They’re also using the same process to create a 9-DOF sensor set, with all of the sensors on a single MEMS chip – also for release at the end of the year. And the idea is that, if they wanted to, they could also include a pressure sensor and a microphone, since they can all presumably be made on this same process. Yeah, you might wonder whether integrating a microphone with those other sensors has value; but even if it doesn’t, being able to make it separately using the same process as the n-DOF chip still brings a huge level of manufacturing simplification.

These efforts, if successful, could represent a fresh breath of efficiency for some of the oldest sensors in the MEMS world. The industry also has new MEMS elements in the works, like gas sensors and such. If a standard process like this could be used for those as well, then at some point new sensors could launch on standard processes rather than having to go through the “one product, one process” stage first, as accelerometers and their ilk have done.

There are those who believe that these standard processes are too restrictive to allow the design of sensors with arbitrary characteristics. We’ll continue to keep an eye on this stuff to see whether these common-process skeptics can eventually be appeased or whether they’ll be proven correct.

Check out the details in Tronics’s release.
