Feb 18, 2014

UV Index Sensor (and Gesture Recognition)

posted by Bryon Moyer

Have you been out in the sun too long?

OK, yeah, not really the right time of year to ask that question north of the equator… Especially around here in the Northwest, under a thick blanket of puffy gray.

So the answer is probably, “No.” But, come springtime, you’re going to want to get all of that flesh exposed to suck up those rays it’s been missing during the Dark Months. So… how do you know how long to stay out? Other than the telltale pink that indicates you’re too late?

What if your wearable device could measure that for you? That’s the goal of a couple of new Silicon Labs optical sensors: the Si1132, combined with an ambient light sensor (ALS), and the Si1145/6/7 devices, which include an ALS, an IR proximity detector, and one or more LED drivers. All in clear 2x2 mm² packages.

To some extent, you might say that this is just a photodetector that responds in the UV range. But you’d then look at the block diagram and notice that there’s no UV photodiode shown.


I asked about that, and it turns out that their visible light detector also responds to UVA and UVB, and they use proprietary algorithms to extract the UV index from those responses. You could do the same thing today (if you had the algorithms), but you’d need a separate UV detector and would have to do the index calculation yourself. With these devices, it’s all integrated, and what you read out is the pre-calculated index.

Note also that there’s nothing in that diagram for accumulating exposure. That’s because the device doesn’t actually do that; it just gives a real-time UV index reading that the system designer can accumulate to determine overall exposure.
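If you’re curious what that looks like from the host side, here’s a rough sketch of polling the index over I2C and tallying exposure yourself. The I2C address, register address, and scaling below are my assumptions for illustration (check the datasheet), not anything pulled from the announcement.

```c
/* Minimal sketch: poll a UV-index register over I2C and accumulate exposure.
 * The I2C address (0x60), register (0x2C), and /100 scaling are assumptions
 * for illustration only -- consult the Silicon Labs datasheet. */
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define SI114X_ADDR      0x60   /* assumed 7-bit I2C address */
#define REG_UVINDEX_LSB  0x2C   /* assumed 16-bit UV index register, LSB first */

static int read_uv_index_x100(int fd, uint16_t *out)
{
    uint8_t reg = REG_UVINDEX_LSB, buf[2];
    if (write(fd, &reg, 1) != 1) return -1;   /* set the register pointer */
    if (read(fd, buf, 2) != 2) return -1;     /* read LSB then MSB */
    *out = (uint16_t)((buf[1] << 8) | buf[0]);
    return 0;
}

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0 || ioctl(fd, I2C_SLAVE, SI114X_ADDR) < 0) return 1;

    double dose = 0.0;              /* crude running "UV index * hours" tally */
    const double interval_s = 60.0; /* sample once a minute */

    for (;;) {
        uint16_t raw;
        if (read_uv_index_x100(fd, &raw) == 0) {
            double index = raw / 100.0;   /* assumed: device reports index x 100 */
            dose += index * (interval_s / 3600.0);
            printf("UV index %.2f, accumulated %.2f index-hours\n", index, dose);
        }
        sleep((unsigned)interval_s);
    }
}
```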

The LED drivers in the Si1145/6/7 series break down like this: the 1-LED version supports motion detection, 2 LEDs support 2D gesture recognition, and 3 LEDs support 3D gesture recognition. The LEDs are driven under the control of this device, while the device senses the response. It also has its own IR emitter for proximity checking.
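To make the 2-LED case concrete, here’s a toy sketch of how host software might turn two reflected-IR channels into a left/right swipe: whichever LED sees the hand first tells you the direction. The driver hook, threshold, and simulated data are all made up for illustration; the real part has its own measurement sequencing.

```c
/* Toy 2-LED swipe detection. read_prox_channel() is a stand-in for whatever
 * driver call returns the reflectance measured while LED0 or LED1 is pulsed;
 * here it simulates a hand crossing from the LED0 side toward LED1 so the
 * sketch runs standalone. All numbers are invented. */
#include <stdint.h>
#include <stdio.h>

#define PROX_THRESHOLD 500   /* arbitrary "hand present" level */

typedef enum { SWIPE_NONE, SWIPE_LEFT_TO_RIGHT, SWIPE_RIGHT_TO_LEFT } swipe_t;

static uint16_t read_prox_channel(int led)
{
    static int tick;
    tick += (led == 0);                 /* advance once per sample pair */
    int center = (led == 0) ? 5 : 12;   /* LED0 sees the simulated hand earlier */
    int d = tick - center;
    int v = 800 - 60 * (d < 0 ? -d : d);
    return v > 0 ? (uint16_t)v : 0;
}

static swipe_t detect_swipe(int samples)
{
    int first = -1;  /* which LED crossed the threshold first */
    for (int i = 0; i < samples; i++) {
        uint16_t p0 = read_prox_channel(0);
        uint16_t p1 = read_prox_channel(1);
        if (first < 0) {
            if (p0 > PROX_THRESHOLD && p0 >= p1) first = 0;
            else if (p1 > PROX_THRESHOLD) first = 1;
        } else if (p0 < PROX_THRESHOLD && p1 < PROX_THRESHOLD) {
            /* hand has left the field: direction follows whichever LED fired first */
            return (first == 0) ? SWIPE_LEFT_TO_RIGHT : SWIPE_RIGHT_TO_LEFT;
        }
    }
    return SWIPE_NONE;
}

int main(void)
{
    printf("detected swipe: %d\n", detect_swipe(30));
    return 0;
}
```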


You can find more information in their release.

Feb 13, 2014

The Case for Zigbee

posted by Bryon Moyer

Not long ago I did a piece on wireless technologies. It was stimulated by the fact that Bluetooth Low Energy (BT-LE) seems to be on everyone’s “new support” list. While I didn’t pan Zigbee per se, it also didn’t figure in my analysis, and, frankly, it came up only with respect to complaints some folks had had about how hard it was to use.

Since then, I’ve had some discussion with the good folks from Zigbee, and they make a case for a scenario involving WiFi, BT-LE, and Zigbee as complementary technologies sharing the winnings, as contrasted with the two-protocol scenario I posited.

The challenges I raised included the ease-of-use thing and the fact that Zigbee wasn’t making it onto phones, and phones seemed to be figuring pretty prominently in most home networking scenarios. We talked about both of these.

With respect to Zigbee being hard to use, they don’t really dispute that. Actually, “hard” is a relative term – they see it as a comparison with WiFi, which can be easier to implement (at the cost of higher power, of course). Their primary point here is that WiFi implements only the bottom two layers of the ISO stack, relying on other standards like IP, TCP, and UDP for higher-level functionality.

Zigbee, by contrast, covers the first five ISO stack layers. So when you implement it, you’re not just getting the low-level stuff going; you also have to deal with network-level and session-level considerations. Now… you could argue that you still have to implement all five layers with WiFi; it’s just that you’re going outside the WiFi standard to do so.

Add to this the details of specific types of devices, and it would seem the complexity goes up – yet perhaps not. Neither Zigbee nor BT-LE is generic enough to allow simple swapping of devices. Zigbee has device type profiles to account for this: these are essentially device-level semantics that standardize how a particular device type interacts with the network.

Their claim is that BT-LE has the same kind of device-dependency, only there are no established profiles yet. Each pairing essentially gets done on its own. So Zigbee might look more complex due to all the extra profiles – while, in fact, that’s actually a benefit, since BT-LE doesn’t have them but needs them.
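To make the profile idea concrete, here’s a small illustration: the Home Automation profile and the ZCL On/Off cluster give every compliant light the same “turn on” semantics, so a controller doesn’t care whose silicon is inside. The numeric IDs below are the standard Home Automation profile and On/Off cluster values; the frame struct and send function are simplified stand-ins, not a real stack API.

```c
/* Illustration of device-level semantics standardized by a profile: any
 * On/Off device on the network understands the same command frame. The
 * struct and send_zcl_command() are simplified stand-ins for a real stack. */
#include <stdint.h>
#include <stdio.h>

#define PROFILE_HOME_AUTOMATION  0x0104
#define CLUSTER_ON_OFF           0x0006
#define CMD_OFF                  0x00
#define CMD_ON                   0x01
#define CMD_TOGGLE               0x02

typedef struct {
    uint16_t profile_id;   /* which application profile the frame belongs to */
    uint16_t cluster_id;   /* which cluster (device capability) is addressed */
    uint8_t  command_id;   /* command within that cluster */
    uint16_t dest_addr;    /* short network address of the target device */
} zcl_frame_t;

/* Stand-in for handing a frame to an actual Zigbee stack. */
static void send_zcl_command(const zcl_frame_t *f)
{
    printf("to 0x%04X: profile 0x%04X, cluster 0x%04X, cmd 0x%02X\n",
           (unsigned)f->dest_addr, (unsigned)f->profile_id,
           (unsigned)f->cluster_id, (unsigned)f->command_id);
}

int main(void)
{
    /* Because the profile standardizes the semantics, "turn on" is the same
     * frame for every compliant On/Off device, regardless of vendor. */
    zcl_frame_t turn_on = {
        .profile_id = PROFILE_HOME_AUTOMATION,
        .cluster_id = CLUSTER_ON_OFF,
        .command_id = CMD_ON,
        .dest_addr  = 0x1A2B,   /* made-up short address */
    };
    send_zcl_command(&turn_on);
    return 0;
}
```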

I don’t know if these explanations are any consolation to folks struggling with the tough task of implementing Zigbee; if the benefits are there, then the effort will be rewarded. If not, then Zigbee becomes a target for something less painful.

So what would those benefits be? The one clear thing is that Zigbee has far greater range than BT-LE. But it also supports much larger networks, and ones that can change dynamically. And this is where the whole phone thing comes in. They see BT as largely a phone-pairing protocol. One device, one phone. Like a wearable gadget or a phone peripheral. Not a full-on network.

How does that play into home networking and the Internet of Things? Here’s the scenario they depict: Within the home, the cloud connection comes through WiFi, and in-home communication happens via WiFi (to the phone) and Zigbee (between devices and to whatever acts as the main hub). Outside the home, the phone becomes critical as the way to access the home, but then it uses the cellular network.

In other words, for home networking, they see no real BT-LE role. They divide the world up as:

  • WiFi for heavy data and access to the cloud;
  • Zigbee for home and factory networks; and
  • BT-LE for pairing phones with individual gadgets like wearables.

This is consistent with the fact that Zigbee isn’t prevalent on phones, since phones typically don’t participate in Zigbee networks. In their scenario, the phone component of the home network happens outside the home on the cellular network.

Obviously Zigbee has been around for much longer and has an established position in home and factory networking. The question has been whether they would hold that position against other standards that are perceived as easier to use.

Their rationale makes sense, but designers aren’t always well-behaved. Even though, for example, BT-LE might not have the same full-on networking capabilities as Zigbee, some stubborn engineers might, say, implement in-home BT-LE as pairings between a hub and devices, letting the hub manage the devices rather than having a distributed network. And they might also stubbornly have devices connect directly to a phone within the home, rather than having the phone use WiFi to talk to a hub that uses something else to talk to the device.

Kludges? Bad design decisions? Who knows. There are so many considerations that determine winners and losers – and, so often, non-technical ones like ecosystems and who played golf with whom can have an outsized impact. If a less elegant approach is perceived to be easier to implement, it could win.

That said, Zigbee has made a cogent case for their role. Will designers buy in?

Feb 06, 2014

New Hall Effect Sensors Sans Choppers

posted by Bryon Moyer


My first exposure to the details of sensor design came at ISSCC several years ago. I watched a series of presentations that were, in reality, over my head. I did a series of articles on them, but it took a lot of study afterwards for me to figure out all the things that were going on, and amongst those things, which were most important.

Much of that was due to the circuitry used to amplify, filter, linearize, and stabilize the sensor signal, which starts out as a tepid electrical anomaly and gets boosted into actual data. And one common thread was the use of chopping circuits.

I could easily get out of my comfort zone here, but my high-level summary of choppers is that they accomplish (at least) two things. First, in the (likely) event of ambient noise, you reject much of it because you’re only sampling the signal some of the time. Any noise that shows up when you don’t happen to be connected makes it no further down the signal chain. And the sampled values get averaged, further reducing noise.

In addition, where you have differential signals, you can eliminate bias by switching the polarity back and forth. What was an added bias in one cycle becomes a subtracted bias in the next cycle, and the averaging eliminates it.
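Here’s a toy numerical sketch of that polarity trick (my own illustration, not anything from a datasheet): an offset that adds in one chopper phase subtracts in the next, so averaging each pair of samples leaves just the signal.

```c
/* Toy demonstration of offset cancellation by polarity chopping: the bias
 * adds in one phase and subtracts in the other, so each averaged pair keeps
 * only the signal. Numbers are invented purely for illustration. */
#include <stdio.h>

int main(void)
{
    const double signal = 0.250;   /* the quantity we actually want, in mV */
    const double offset = 0.040;   /* amplifier/sensor bias we want to reject */

    double sum = 0.0;
    const int pairs = 4;
    for (int i = 0; i < pairs; i++) {
        double phase_a =  signal + offset;     /* normal polarity */
        double phase_b = -(-signal + offset);  /* inputs swapped, output re-inverted */
        sum += 0.5 * (phase_a + phase_b);      /* offset cancels within each pair */
    }
    printf("recovered signal: %.3f mV (true %.3f, offset %.3f)\n",
           sum / pairs, signal, offset);
    return 0;
}
```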

This is all analog stuff, and I tread lightly here – which was why those original sensor stories were a challenge to do. Whether or not I understood all of the circuits in detail, what was clear to me was that an important part of the sensor design was the analog circuitry that accompanied it, and a common, useful part of that was the chopping concept. I’ve taken it on faith that many sensors have such circuits buried away on their ASICs.

And then Honeywell releases a new Hall Effect sensor, bragging, among other things, about the fact that it uses no chopping. I thought chopping was a good thing, and they’re making it out to be a bad thing. What’s up with that?

To be clear, their new sensor isn’t the first to do this. They’ve talked about the value of chopper-less sensors for at least a couple years now. The new sensor reduces cost and package size, but I have to admit that it was the chopper discussion that caught my attention.


These sensors are used, among other things, for brushless DC motors. I frankly haven’t worked on a motor since high school, but even I remember that brushed motors use brushes and a commutator to switch the field mechanically as the rotor turns. Brushless motors replace that arrangement with sensitive magnetic sensors to determine the rotor position and, from that, figure out when it’s time to reverse the field that’s pulling the rotor around.
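For the sensored case, the classic arrangement is three Hall sensors giving six valid rotor-position states, with a lookup table picking which phases to energize in each state. The sketch below is illustrative only; the actual mapping depends on the motor’s windings and where the sensors sit.

```c
/* Hedged sketch of sensored BLDC commutation: three Hall sensors yield six
 * valid position states, and a table picks the phase drive for each. The
 * particular mapping here is illustrative, not tied to any real motor. */
#include <stdint.h>
#include <stdio.h>

typedef enum { PH_OFF, PH_HIGH, PH_LOW } drive_t;
typedef struct { drive_t a, b, c; } step_t;

/* Index = (hall_c << 2) | (hall_b << 1) | hall_a; states 0 and 7 are invalid. */
static const step_t commutation[8] = {
    [1] = { PH_HIGH, PH_LOW,  PH_OFF  },
    [3] = { PH_HIGH, PH_OFF,  PH_LOW  },
    [2] = { PH_OFF,  PH_HIGH, PH_LOW  },
    [6] = { PH_LOW,  PH_HIGH, PH_OFF  },
    [4] = { PH_LOW,  PH_OFF,  PH_HIGH },
    [5] = { PH_OFF,  PH_LOW,  PH_HIGH },
};

static void commutate(uint8_t hall_state)
{
    if (hall_state == 0 || hall_state == 7) return;   /* sensor fault */
    step_t s = commutation[hall_state];
    /* In real firmware this would update the gate-driver outputs; the less
     * time between reading the Halls and doing this, the tighter the loop. */
    printf("halls=%u -> A:%d B:%d C:%d\n", hall_state, s.a, s.b, s.c);
}

int main(void)
{
    /* Walk through one electrical revolution's worth of Hall states. */
    const uint8_t sequence[6] = { 1, 3, 2, 6, 4, 5 };
    for (int i = 0; i < 6; i++) commutate(sequence[i]);
    return 0;
}
```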

Optimal motor control involves careful timing, and ideally, your motor control circuit can respond instantaneously to the field measurements for a tight control loop. But, in fact, the calculations take finite time, meaning that the response lags slightly. The less the lag, the more efficient the operation. (You could argue that the algorithm should just project forward slightly based on trajectory – perhaps that’s possible, although it’s more complex, and if the trajectory were that predictable, you wouldn’t need to measure all this in the first place.)

And that’s the issue with the chopping: the fact that you’re sampling and averaging adds to the calculation time. Not by a ton, but more than if you weren’t chopping. The increased lag time makes it harder to optimize the motor control.

Secondarily, the chopping circuits also create electrical noise at the chopping frequency, which either radiates or has to be filtered. That may or may not be a big issue, depending on the application.

OK, so if you go chopper-less, then how do you get stability and sensitivity? Honeywell addresses stability by using four Hall elements arranged in the four cardinal directions, so to speak. This washes out biases in a particular direction.
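A toy example of that washing-out, with numbers I made up: a stray gradient that adds to the element on one side subtracts from the one opposite, so the four-way average keeps only the common field.

```c
/* Toy sketch of four-element averaging: a directional bias that adds on one
 * side subtracts on the opposite side, so the sum leaves only the common
 * field. The numbers are invented for illustration. */
#include <stdio.h>

int main(void)
{
    const double field = 1.00;            /* true field at the sensor, arbitrary units */
    const double east_west_bias = 0.15;   /* stray gradient along one axis */

    double north = field;
    double south = field;
    double east  = field + east_west_bias;
    double west  = field - east_west_bias;

    printf("averaged reading: %.2f (true field %.2f)\n",
           (north + south + east + west) / 4.0, field);
    return 0;
}
```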

As to sensitivity, well, they say they have “programming” that accounts for package stresses and other noise contributions so that the small signal of interest can be more confidently extracted. Some secret sauce going on here…

And so they highly recommend chopper-less Hall Effect sensors for commutation of brushless DC motors (and other applications). Actually, to be specific, they recommend their chopper-less sensors. You can read more about the details in their latest announcement.
