posted by Bryon Moyer
My first exposure to the details of sensor design came at ISSCC several years ago. I watched a series of presentations that were, in reality, over my head. I did a series of articles on them, but it took a lot of study afterwards for me to figure out all the things that were going on – and which of those things were most important.
Much of that was due to the circuitry used to amplify, filter, linearize, and stabilize the sensor signal, which starts out as a tepid electrical anomaly and gets boosted into actual data. And one common thread was the use of chopping circuits.
I could easily get out of my comfort zone here, but my high-level summary of choppers is that they accomplish (at least) two things. First, in the (likely) event of ambient noise, you reject much of it because you’re only sampling the signal some of the time. Any noise that shows up when you don’t happen to be connected makes it no further down the signal chain. And the sampled values get averaged, further reducing noise.
In addition, where you have differential signals, you can eliminate bias by switching the polarity back and forth. What was an added bias in one cycle becomes a subtracted bias in the next cycle, and the averaging eliminates it.
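To make that offset-cancellation idea concrete, here's a toy numerical sketch – my own illustration, not any vendor's circuit. A constant bias added after the chopper flips sign on every other sample once the output is de-chopped, so averaging removes the bias while the signal survives.

```python
# Toy model of chopper offset cancellation (illustrative only).
# The chopper alternately inverts the input; the amplifier adds a
# constant bias; de-chopping re-inverts the output. The bias then
# alternates in sign and averages away, while the signal does not.

signal = 2.0   # the (slowly varying) quantity we care about
bias = 0.5     # constant offset added downstream of the chopper

samples = []
for k in range(1000):
    polarity = 1 if k % 2 == 0 else -1
    measured = polarity * signal + bias   # chopped input plus bias
    samples.append(polarity * measured)   # de-chop the output

average = sum(samples) / len(samples)
print(round(average, 6))  # → 2.0 (the 0.5 bias has cancelled out)
```

On even samples the de-chopped value is signal + bias; on odd samples it's signal − bias; the average is just the signal.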
This is all analog stuff, and I tread lightly here – which was why those original sensor stories were a challenge to do. Whether or not I understood all of the circuits in detail, what was clear to me was that an important part of the sensor design was the analog circuitry that accompanied it, and a common, useful part of that was the chopping concept. I’ve taken it on faith that many sensors have such circuits buried away on their ASICs.
And then Honeywell releases a new Hall Effect sensor, bragging, among other things, about the fact that it uses no chopping. I thought chopping was a good thing, and they’re making it out to be a bad thing. What’s up with that?
To be clear, their new sensor isn’t the first to do this. They’ve talked about the value of chopper-less sensors for at least a couple years now. The new sensor reduces cost and package size, but I have to admit that it was the chopper discussion that caught my attention.
These sensors are used, among other things, for brushless DC motors. I frankly haven’t worked on a motor since high school, but even I remember that a brushed motor used brushes to carry current from the stator to the rotor, switching the field at the right point in the rotation. Brushless motors replace the brushes with sensitive magnetic sensors to determine the rotor position and, from that, figure out when it’s time to reverse the field that’s pulling the rotor around.
Optimal motor control involves careful timing, and ideally, your motor control circuit can respond instantaneously to the field measurements for a tight control loop. But, in fact, the calculations take finite time, meaning that the response lags slightly. The less the lag, the more efficient the operation. (You could argue that the algorithm should just project forward slightly based on trajectory – perhaps that’s possible, although it’s more complex, and if the trajectory were that predictable, you wouldn’t need to measure all this in the first place.)
And that’s the issue with the chopping: the fact that you’re sampling and averaging adds to the calculation time. Not by a ton, but more than if you weren’t chopping. The increased lag time makes it harder to optimize the motor control.
Secondarily, the chopping circuits also create electrical noise at the chopping frequency, which either radiates or has to be filtered. That may or may not be a big issue, depending on the application.
OK, so if you go chopper-less, then how do you get stability and sensitivity? Honeywell addresses stability by using four Hall elements arranged in the four cardinal directions, so to speak. This washes out biases in a particular direction.
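Here's a rough sketch of why symmetric placement helps – my own toy model, not Honeywell's actual design. If a stray bias varies linearly across the die, it adds equal and opposite amounts at opposite elements, so the four-way average keeps only the uniform field of interest.

```python
# Toy model: four Hall elements at N, E, S, W positions around the
# die center. A linear stray-field gradient (a directional bias)
# contributes +g at one element and -g at the opposite one, so it
# drops out of the average; the uniform field of interest remains.

true_field = 1.0        # uniform field we want to measure
gx, gy = 0.3, -0.2      # stray linear gradient components (the bias)

positions = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W

readings = [true_field + gx * x + gy * y for x, y in positions]
estimate = sum(readings) / len(readings)
print(round(estimate, 6))  # → 1.0 (gradient bias washed out)
```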
As to sensitivity, well, they say they have “programming” that accounts for package stresses and other noise contributions so that the small signal of interest can be more confidently extracted. Some secret sauce going on here…
And so they highly recommend chopper-less Hall Effect sensors for commutation of brushless DC motors (and other applications). Actually, to be specific, they recommend their chopper-less sensors, whose details you can read more about in their latest announcement.
posted by Bryon Moyer
Intersil has announced a new RGB sensor, and they’ve laid out some of the things that it’s good for. But let’s back up a sec before diving in.
RGB sensors sound pretty straightforward, and their utility seems pretty obvious. But they’re not the only light sensors in town, so first let’s position them with respect to other light sensors on that system that everyone wants a piece of: the smartphone.
There’s already an RGB sensor on your phone: it’s in the camera. It’s on the back of the phone, typically. On the front side, there are two other light sensors. There’s the ambient light sensor (ALS), which simply detects light intensity so that it can decide how much to dim the screen (and other things like lighting a keyboard). The main difference between an ALS and an RGB sensor is that the RGB sensor provides three channels of data for the three colors; the ALS just gives one number. And the ALS typically costs less, although Intersil sees cost parity on the horizon.
The other light sensor on the phone is the proximity sensor; it combines an IR LED with an IR detector to figure out whether you’re holding the phone to your cheek so that it can disable screen functionality. While Intersil’s new sensor actually can detect down into the near infrared range, it can’t function as the detector in the proximity sensor, so we won’t travel down that path any further.
If you subscribe to cost parity between ALS and RGB sensors, then having an RGB sensor handle the ALS function can help with the various kinds of glass now being used on phones. Colored glass in particular can complicate detection, so a tunable RGB setup can help deal with that. Sensitivity is important here, since the glass on most phones can block up to 90% of the light.
Light intensity is measured in lux; full daylight is about 100K lux, while moonlight is about 1 lux. But if only 10% of the light gets through, then full daylight will look like 10K lux behind the glass. So they focused on sensitivity up to that range, going down to 0.005 lux at the bottom end for dark-environment performance.
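The arithmetic behind those numbers is simple enough to sketch, using the values quoted above:

```python
# Scale scene illuminance by the glass transmission to see what the
# sensor actually has to resolve behind the phone's cover glass.

transmission = 0.10  # glass passes ~10% of the light (blocks ~90%)

scene_lux = {"full daylight": 100_000, "moonlight": 1}
behind_glass = {name: round(lux * transmission, 3)
                for name, lux in scene_lux.items()}
print(behind_glass)  # → {'full daylight': 10000.0, 'moonlight': 0.1}
```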
So that’s phones, but there are other ideas afloat as well. Color calibration of screens to printers is one. More interesting, since it’s a dynamic application, is compensation for display aging. In particular, with OLEDs, the blue color component is newer and less “perfected,” so it ages more quickly than the others. That means that the color mix changes over time.
Using an RGB sensor, that change can be detected and compensated by boosting blue power to keep consistent color over the life of the screen. They can sync with multiple sensors for the case where the display is divided into regions, each with its own sensor. The same can be done for projectors (minus the regional thing).
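As a sketch of what such compensation could look like – this is my own illustration, not Intersil's actual algorithm – you could measure a reference white with the RGB sensor, compare it to the panel's original output, and derive per-channel gains that restore the balance:

```python
# Hypothetical aging compensation: compute per-channel drive gains
# from an RGB sensor reading of a white patch, so that the faded
# blue channel gets boosted back to its original level.

target = (1.0, 1.0, 1.0)      # R, G, B measured when the panel was new
measured = (1.0, 0.98, 0.80)  # blue has sagged ~20% with age

# Gain per channel so that measured * gain matches the target again.
gains = tuple(t / m for t, m in zip(target, measured))
print([round(g, 3) for g in gains])  # → [1.0, 1.02, 1.25]
```

In practice the gains would have to be clamped to whatever drive headroom the panel actually has.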
Meanwhile, LEDs are bringing some fundamental changes to room lighting. In the past, there were a couple discrete choices for light “temperature” (a measure of the “whiteness” color): incandescent and fluorescent. But LEDs can be tuned to some extent, and individual LEDs also vary, so identical arrays of LEDs might have different temperatures.
For all of these cases, an RGB sensor can help detect the temperature and maintain consistency both unit-to-unit and over time. You could even change the lighting color to suit your mood.
In a camera, a sensor can detect the ambient light temperature and compensate the exposure accordingly. In fact, by having a separate sensor do that, the exposure can be pre-computed, significantly reducing some of the lag time between button-push and picture-take.
From a spec standpoint, Intersil says they’re differentiated by three characteristics: size, accuracy, and – of course – power.
They claim the smallest device, with a 1.65 × 1.65 mm package. Their accuracy (“total error” on the data sheet) is 10%, as compared with 15% and higher for other parts.
Power is specified at less than 1.45 µA quiescent and 85 µA active. Those are the max numbers; other numbers floating around reflect typicals (and they say they’ve measured against the competition to establish who has the lowest typical power as well – with data to prove that they do).
You can find more in their announcement.
posted by Bryon Moyer
Synopsys recently announced their HAPS DX (Developer eXpress) product, and the story surrounding that release spoke to many of the things that Synopsys sees as good in their prototyping solution. But a few questions clarified that many of those things have already been available in the existing HAPS offerings. So what’s the key new thing that HAPS DX enables?
Turns out it has to do with the distinction between designing IP and designing an SoC. And this is actually a theme I’m seeing in other contexts as well.
IP started out as mini-designs that were built with the same tools as a full-up chip (or FPGA). Frankly, for a lot of IP companies, the products on the shelf probably wouldn’t have worked for any arbitrary application: they’d need tweaking first. So these products were largely a way to get consulting contracts that would modify the shrink-wrapped IP into something that included all the specifics the client needed.
Even then, folks looked askance at IP, preferring to do it themselves for NIH and control reasons as well as due to the illusion that inside folks were free (or at least already paid for). IP company survival was not a given.
Today it’s assumed that any designer of an SoC will spend a lot of effort (and money) integrating IP; it’s no longer cool to invent a new wheel. But this has changed the nature of design. While full chip design used to be just a bigger version of the process used to design IP, now IP is more about low-level gate design and SoCs are more about assembly (with lower-level design where absolutely necessary).
So now there’s more of a break between where the IP design stops and the SoC design starts, and tools are starting to reflect the challenges of this change of methodology. And that’s the main benefit to the HAPS DX product: it allows for a more seamless transition from IP design to SoC design.
Before, one person might design and verify the IP, and then the user started from scratch, redoing much of the work the original IP designer had done when prototyping. HAPS DX, by contrast, is supposed to bridge that gap, with data generated in the IP phase pushed forward for re-use when that IP is integrated into the SoC.
You can see more of what they’re saying in their announcement.