Feb 19, 2014

Synopsys Does… Software?

posted by Bryon Moyer

Synopsys has gone shopping again, and this time they went to a completely different mall. They came back with Coverity.

Just another acquisition? Nope… This one seems different.

Synopsys has had their nose to the grindstone working on chip design since… well, since their very beginning. Who knows how many acquisitions they’ve made (I’m sure someone’s kept count, but it’s not something I pay that much attention to), but all of them have had something to do with chip design.

There was possibly one chink with the EVE emulation acquisition: it’s about chip design too, but it allows more thorough validation of software on an SoC platform before that platform is committed to masks. (Although, if the rumors are true, the acquisition has decidedly not been good for the EVE product line…)

And, if you were paying close attention, you saw the key word: software. Seems that every now and then, CEO Aart de Geus might get a question about other areas that Synopsys might play in, including software. And each time it was a categorical answer that reinforced the steady Synopsys focus on chips, chips, chips.

But Coverity isn’t about designing chips. They’re about finding and fixing bugs in software. And not just software that will run on an SoC. I mean, yeah, that software, but also, any software for running anywhere. Their core customers are companies writing software that has to be clean and defect-free. (In other words, not Microsoft. OK, sorry, that was a cheap shot… far too easy…) Mission-critical software, where failure could cause harm or damage or a recall.

Their technology is based on static analysis techniques developed at Stanford and spun out from there. They’ve focused on integrating with large-scale development platforms, with complex build sequences and code bases. Decidedly not the kind of thing that’s been going on in the halls of Synopsys.

So this seems to indicate a significant change of direction for Synopsys. If not a move away from chips, then at least a first strike at something other than chips. Have they given up on getting the growth they expect (or the shareholders expect) from EDA alone? Are they going to enter the larger systems markets in a bigger way? Hard to say. But it’s also hard to imagine that their Coverity acquisition would be a one-off. Seems we should be watching for other moves outside the chip space.

More details in their release.

Feb 18, 2014

UV Index Sensor (and Gesture Recognition)

posted by Bryon Moyer

Have you been out in the sun too long?

OK, yeah, not really the right time of year to ask that question north of the equator… Especially around here in the Northwest, under a thick blanket of puffy gray.

So the answer is probably, “No.” But, come springtime, you’re going to want to get all of that flesh exposed to suck up those rays it’s been missing during the Dark Months. So… how do you know how long to stay out? Other than the telltale pink that indicates you’re too late?

What if your wearable device could measure that for you? That’s the goal of a couple of new Silicon Labs optical sensors: the Si1132, combined with an ambient light sensor (ALS), and the Si1145/6/7 devices, which include an ALS, IR proximity detector, and one or more LED drivers. All in clear 2×2 mm² packages.

To some extent, you might say that this is just a photodetector that responds in the UV range. But you’d then look at the block diagram and notice that there’s no UV photodiode shown.

[Figure: Si1132 block diagram]

I asked about that, and it turns out that their visible light detector also responds to UVA and UVB, and they use proprietary algorithms to extract the UV index from them. You could do the same thing today (if you had the algorithms), but you’d need to get a plain UV detector and do the index calculation yourself using separate devices. With these devices, it’s integrated, and what you read out is the pre-calculated index.

Note also that there’s nothing in that diagram for accumulating exposure. That’s because the device doesn’t actually do that; it just gives a real-time UV index reading that the system designer can accumulate to determine overall exposure.

The LED drivers in the Si1145/6/7 series are summarized as using the 1-LED version for motion detection, 2 LEDs for 2D gesture recognition, and 3 LEDs for 3D gesture recognition. The LEDs are driven under control of this device, while the device senses the response. It also has its own IR emitter for proximity checking.

[Figure: Si114x block diagram]

You can find more information in their release.

Feb 13, 2014

The Case for Zigbee

posted by Bryon Moyer

Not long ago I did a piece on wireless technologies. It was stimulated by the fact that Bluetooth Low Energy (BT-LE) seems to be on everyone’s “new support” list. While I didn’t pan Zigbee per se, it also didn’t figure in my analysis, and, frankly, it came up only with respect to complaints some folks had had about how hard it was to use.

Since then, I’ve had some discussion with the good folks from Zigbee, and they make a case for a scenario involving WiFi, BT-LE, and Zigbee as complementary technologies sharing the winnings, as contrasted with the two-protocol scenario I posited.

The challenges I raised included the ease-of-use thing and the fact that Zigbee wasn’t making it onto phones, and phones seemed to be figuring pretty prominently in most home networking scenarios. We talked about both of these.

With respect to Zigbee being hard to use, they don’t really dispute that. Actually, “hard” is a relative term – they see it as a comparison with WiFi, which can be easier to implement (at the cost of higher power, of course). Their primary point here is that WiFi implements only the bottom two layers of the ISO stack, relying on other standards like IP, TCP, and UDP for higher-level functionality.

Zigbee, by contrast, covers the first five ISO stack layers. So when you implement it, you’re not just getting the low-level stuff going; you have to deal with network-level and session-level considerations. Now… you could argue that you still have to implement all five layers with WiFi; it’s just that you’re going outside the WiFi standard to do so.

Add to this the details of specific types of devices, and it would seem the complexity goes up – yet perhaps not. Neither Zigbee nor BT-LE is generic enough to allow simple swapping of devices. Zigbee has device type profiles to account for this: these are essentially device-level semantics that standardize how a particular device type interacts with the network.

Their claim is that BT-LE has the same kind of device-dependency, only there are no established profiles yet. Each pairing essentially gets done on its own. So while Zigbee might look more complex because of all the extra profiles, that’s actually a benefit: BT-LE needs such profiles and doesn’t yet have them.

I don’t know if these explanations are any consolation to folks struggling with the tough task of implementing Zigbee; if the benefits are there, then the effort will be rewarded. If not, then it becomes a target for something less painful.

So what would those benefits be? The one clear thing is that Zigbee has far greater range than BT-LE. But it also supports much larger networks, and ones that can change dynamically. And this is where the whole phone thing comes in. They see BT as largely a phone-pairing protocol. One device, one phone. Like a wearable gadget or a phone peripheral. Not a full-on network.

How does that play into home networking and the Internet of Things? Here’s the scenario they depict: Within the home, the cloud connection comes through WiFi, and in-home communication happens via WiFi (to the phone) and Zigbee (between devices and to whatever acts as the main hub). Outside the home, the phone becomes critical as the way to access the home, but then it uses the cellular network.

In other words, for home networking, they see no real BT-LE role. They divide the world up as:

  • WiFi for heavy data and access to the cloud;
  • Zigbee for home and factory networks; and
  • BT-LE for pairing phones with individual gadgets like wearables.

This is consistent with the fact that Zigbee isn’t prevalent on phones, since phones typically don’t participate in Zigbee networks. In their scenario, the phone component of the home network happens outside the home on the cellular network.

Obviously Zigbee has been around for much longer and has an established position in home and factory networking. The question has been whether they would hold that position against other standards that are perceived as easier to use.

Their rationale makes sense, but designers aren’t always well-behaved. Even though, for example, BT-LE might not have the same full-on networking capabilities as Zigbee, some stubborn engineers might, say, implement in-home BT-LE as pairings between a hub and devices, letting the hub manage the devices rather than having a distributed network. And they might also stubbornly have devices connect directly to a phone within the home, rather than having the phone use WiFi to talk to a hub that uses something else to talk to the device.

Kludges? Bad design decisions? Who knows. There are so many considerations that determine winners and losers – and, so often, non-technical ones like ecosystems and who played golf with whom can have an outsized impact. If a less elegant approach is perceived to be easier to implement, it could win.

That said, Zigbee has made a cogent case for their role. Will designers buy in?
