Apr 30, 2014

Shipping Data Between Things and the Cloud

posted by Bryon Moyer

As companies rush out to take advantage of the Internet of Things (IoT), platforms are popping up all over. We looked at some of the companies participating a while back, trying to impose some structure on the chaos, but the thing is, everyone has a different idea of what a “platform” is. The common denominator seems to be that some aspect of the IoT is abstracted away, making it easier and cheaper to get up and running. Which is a good thing. The confusing part is which platforms contain which elements.

At EE Live, I got a chance to talk with Xively (a company that did appear briefly in the prior piece). They offer a platform that focuses on communication, which I knew, but I didn’t have a good sense of what that meant. Even in the early discussion, it was tough to calibrate – there are a bazillion buzzwords coined around the IoT, and if you’re not smack-dab in the middle of it, it can be impenetrable. Even once you get calibrated, the buzzwords are overloaded, so you can still think you understand something when, in fact, you don’t.

I think Xively provides a good example of taking the generalities – a “platform” – down to more specifics. In my mind, there are three levels you can work at when setting up communication between a Thing and the Cloud.

At the most basic level, you have the formal communications protocols – WiFi, Ethernet, TCP/IP, etc. The good news about those is that they’re well established and there are lots of solutions available.

The challenge is that, to use them, you typically need lots of fiddly code to establish a connection, get sessions up and running, and then do something useful with the data. Yes, libraries and stacks may be available, but, given the number of people trying to make this part easier, it’s pretty clear that working at this level can be a pain in the tuckus for the uninitiated.

So the next level up is where you can abstract that away: by providing a generic data handling layer. Some – like Xively – might call this a “channel.” At this level, you have higher-level commands that establish connections, wrapping all of the detail required at the protocol level. It’s more of a one-step-and-you’re-on kind of thing. Data is unformatted and has no semantics – it’s just data.
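
To make that difference concrete, here’s a minimal Python sketch. The names – Channel, feed_id, write – are hypothetical stand-ins for whatever a platform’s client library provides, not Xively’s actual API; the point is only the contrast between hand-rolled protocol work and a one-step channel.

```python
import socket

# Protocol level: you own every detail of the connection and the payload format.
def send_reading_raw(host, port, sensor_id, value):
    payload = f"{sensor_id},{value}\n".encode("utf-8")   # hand-rolled framing
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(payload)                            # plus retries, auth, TLS, ...

# Channel level: one step to get on, then plain reads and writes of raw data.
# "Channel" here is a made-up stand-in for a platform's client library.
class Channel:
    def __init__(self, feed_id, api_key):
        # Connection and session details would be hidden in here.
        self.feed_id, self.api_key = feed_id, api_key

    def write(self, name, value):
        # Placeholder for the real transport; the data carries no semantics.
        print(f"[{self.feed_id}] {name} = {value}")

channel = Channel(feed_id="greenhouse-42", api_key="...")
channel.write("temperature", 21.5)
```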

You can take things one level higher and provide business objects. This is more than data: it’s data in a context; it’s semantic data. At the generic level, a payload may contain the temperature setting of a thermostat or an image from a surveillance camera. At the business object level, only a thermostat object can have the temperature setting and only a camera object can have the image.


As a programmer, you program at the business object level. Depending on your resources, you might not do literal object-oriented programming, but presumably you think at the level of a business object. The question is, when communicating with the Cloud, at which level do you inject your data?

  • If all you have is the protocol, then you have data marshalling and all kinds of details to package up your message, and then you have to unpack it on the other side.
  • If you have the generic data level, then you take your data and ship it to the other side in a message of some sort. The other side has to know what’s coming and what to do with it – after all, it’s just generic data when it arrives. But protocol details are replaced with simple “read” and “write” types of concepts.
  • If you actually have formalized business objects available, then you simply ship some semantic element and the other side automatically knows what it is and where it goes (a rough sketch of this follows below).
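
Here’s a minimal Python sketch of that last, business-object level. The Thermostat and Camera classes and the publish helper are invented for illustration – not any vendor’s actual objects – but they show how the meaning travels with the data.

```python
import json
from dataclasses import dataclass, asdict

# Business-object level: the data arrives with its meaning attached.
@dataclass
class Thermostat:
    device_id: str
    setpoint_c: float        # only a thermostat carries a temperature setting

@dataclass
class Camera:
    device_id: str
    image_ref: str           # only a camera carries an image reference

def publish(obj):
    # The type travels with the payload, so the far side knows what the
    # data is and where it goes without any out-of-band agreement.
    return json.dumps({"type": type(obj).__name__, "data": asdict(obj)})

print(publish(Thermostat(device_id="t-101", setpoint_c=20.0)))
# {"type": "Thermostat", "data": {"device_id": "t-101", "setpoint_c": 20.0}}
```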

In this specific case, Xively provides the generic data “channel.” There are no semantics, but the messy protocol details are abstracted away.

Note that this doesn’t mean that Xively provides the entire stack up to and including this generic data level. You implement your own protocol stack (or someone provides their version of a platform that includes this), and you then have it link to the Xively layer. This, of course, implies ecosystem. As a case in point, LogMeIn, a full-up end-to-end communication solution, uses the Xively platform, and they just announced that they’re joining TI’s IoT ecosystem.

The high-level lesson learned is that, when someone offers up a platform, make sure you understand in great detail what’s in the platform and what’s not. It’s not so much that “the platform with the most stuff wins” – maybe, maybe not – but it’s about not being surprised later.

Apr 29, 2014

Meshing With Bluetooth

posted by Bryon Moyer

A while back, we took a look at what seemed to be the dominant two radio protocols in new Internet-of-Things announcements: Bluetooth Low Energy and WiFi.

Which resulted in Zigbee raising their hands and doing a virtual “Ahem…”

So I followed up with a discussion of Zigbee, and the ensuing LinkedIn discussion was passionate (and not necessarily kind to Zigbee).

Emotions and ease-of-use issues aside, the biggest differentiators appear to be range (Zigbee wins) and mesh capability (Zigbee has, Bluetooth doesn’t).

If distance is your issue, then meshing gives you an extra distance bonus, since nodes need only be near each other; the Mother Node or hub or whatever can be much farther away. Traffic arrives at its destination not in a hub-and-spoke manner, but by hopping node to node through the network. The one consideration is that all of this traffic travels through the nodes, which would otherwise be handling only their own traffic; with a mesh, they have to route other nodes’ traffic as well.
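
For a feel of what that relaying burden looks like, here’s a minimal Python sketch of one common mesh scheme – flooding – where every node rebroadcasts messages it hasn’t seen before. The Node class is purely illustrative; real mesh stacks (Zigbee routing, Bluetooth mesh flooding) add routing tables, acknowledgments, and security on top of anything this simple.

```python
# Minimal sketch of a flood-style mesh relay: each node forwards traffic it
# hasn't already seen, so messages hop node-to-node instead of hub-and-spoke.

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []    # nodes within radio range
        self.seen = set()      # message IDs already relayed

    def receive(self, msg_id, dest, payload, ttl):
        if msg_id in self.seen or ttl <= 0:
            return                          # suppress loops and stale traffic
        self.seen.add(msg_id)
        if self.name == dest:
            print(f"{self.name}: delivered {payload!r}")
            return
        for n in self.neighbors:            # relay cost: every node forwards
            n.receive(msg_id, dest, payload, ttl - 1)

# Three nodes in a line: A reaches C even though only B is in range of both.
a, b, c = Node("A"), Node("B"), Node("C")
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
a.receive(msg_id=1, dest="C", payload="turn light on", ttl=5)
```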

So if this sort of configuration is what you need, then it would seem that Zigbee would be the only obvious solution.

Or would it?

CSR has introduced what they call their Smart Mesh. And it’s not Zigbee: it’s built over Bluetooth Smart. Why go through all that effort to do something Bluetooth wasn’t originally designed to do? It goes back to the reason I thought Bluetooth and WiFi were dominating: they’re in smartphones, and Zigbee isn’t.

This adds yet one more wrinkle to the distance scenario above. Yes, if you have a Bluetooth hub, a mesh will give you network reach far beyond what that hub could do on its own. But with the phone, you now have a “mobile hub,” if you will. As long as you’re in range of one of the nodes, anywhere in the network, the phone can access the network for information or control.

You can find out more about CSR’s specific solution in their announcement.

Apr 24, 2014

Mentor Unifies Verification

posted by Bryon Moyer

Seems like verification unification is in the air. We saw it recently with Synopsys, and now we have a move from Mentor.

While Synopsys’ version looked like an effort to unify acquired technology, Mentor’s efforts seem more internal. The big picture involves the unification of simulation, formal, emulation, and virtual prototyping under one umbrella, one interface. In that scheme, Mentor presents each of the technologies as an engine serving the higher-level verification goal; no longer is each one of these things a separate tool.

But a big part of what’s happening here is about conjoining emulation and simulation more seamlessly – trying to unify them to a greater extent. We actually looked at Cadence’s version of this some time back, when the verification environment was even more fractured and confusing. But the high-level picture of this relates to making the distinction between simulation and emulation more transparent.

In concept, emulation should just be a faster simulation engine, and you should be able to push the pieces of your design around between the simulator and the emulator – or the virtual prototype – based on what needs to be verified in the greatest detail and how many cycles are required, either to run the tests themselves or to support the tests (like getting past the boot-up sequence quickly).

In practice, of course, emulators are hardware, and so only so much of the testbench can move from a virtual environment into real hardware – the so-called synthesizable subset. That requires care on the part of verification engineers to support a flexible testbench, and it’s also the point of new verification IP (VIP) that Mentor has included as a part of this announcement – VIP that transitions more easily between simulation and emulation.

So a big part of what’s new here is in Mentor’s Veloce emulator: their new OS3. And there are several new Veloce capabilities that are important for supporting this unification:

  • It supports a more simulation-like interaction.
  • Assertions can now be synthesized.
  • It now tracks coverage.
  • It supports Mentor’s push to move emulators out of the lab and into the data center for more effective sharing and better machine utilization, including multi-tenanting on a single machine.

Two supporting tools help with this. One is VirtuaLAB, which, somewhat surprisingly, was presented as new, but which we actually saw almost exactly two years ago. This is about eliminating rate matchers when generating “real-world” stimulus for verification. The VirtuaLAB boxes can also go into the data center as general stimulus generators, eliminating the need for someone to be physically present in a lab connecting wires to get data.

The other supporting tool is CodeLink, which we saw quite some time ago. While it has supported offline simulation debug all along, it now supports offline emulation debugging and review as well.

There’s actually a subtle consideration here for designs underway on existing Veloce machines. If you migrate your emulators into a data center and start sharing them on the new OS version, chances are it will happen in the middle of some design (it’s impossible to imagine a big company where all projects magically finish at the same time, creating an opportune window for change). The thing is, making changes in the middle of a design project is generally not great for schedule confidence. But Mentor assures us of full backwards compatibility, so that verification plans being executed under older expectations should work just as if nothing had changed.

Meanwhile, they’ve announced a new unified debugger called Visualizer that supports all of the engines, removing the need to move between debuggers when moving between engines.

And, in another trend, simulation results are all stored in a single database, regardless of which engine generated them.

This whole unification movement reflects what’s happening on chips themselves: SoCs now integrate pieces that, in earlier times, would have been created, verified, and debugged separately. And with smaller chunks, you could use separate tools for separate parts of the verification plan. But that’s just not feasible now that every aspect of every circuit has to be known to work properly before cutting an outrageously expensive mask set.

You can get more info in their announcement.
