May 20, 2014

Who Uses Power-Over-Ethernet?

posted by Bryon Moyer

Maxim recently released a reference board (called Pasadena) that implements power-over-Ethernet (PoE). As we were discussing it, I inquired about who is really using PoE. I mean, I’m familiar with it, and yet I hardly ever hear anything about it actually being used.

[Figure: Maxim’s Pasadena power-over-Ethernet reference board]

(Image courtesy Maxim)

They pointed out three specific target markets:

  • Wireless routers
  • Cameras
  • Point-of-sale (PoS… no, not that PoS) terminals. Cash registers, for most of us.

The top two share a characteristic: they’re likely to be positioned in some inconvenient place. PoS terminals… not so much. But they all a) need power and b) send data. So why not do both on the same wire?

This makes perfect sense, by definition, for a wireless access point. That device has no choice but to have a wired connection into the network. That’s what it does – aggregate wireless onto a wire. So you might as well bring power in on that same wire, if it can handle the load.

For the other two, the calculus depends on whether communication is wireless or not. Cameras have a lot of data to send, so a wire might make sense. But if the camera is wireless, it will still need a wire for power unless it can harvest energy (or take advantage of wireless power at a distance, which we’ll talk about soon… it’s not clear that can provide enough power for this, though). So, assuming PoE can provide the power, you can either use wired communication and power or wired power with wireless communication. It’s a single wire either way.

Portable PoS terminals are typically wireless, but they don’t count because they’re also battery-powered; they get zero wires. So for fixed PoS terminals, you’ll always need at least one wire. If you’re using wired Ethernet, then you’ll need that – might as well bring power along for the ride (again, if there’s enough juice).
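
(For the record, “enough juice” is a real constraint: a powered device can count on roughly 12.95 W under IEEE 802.3af and roughly 25.5 W under 802.3at, a.k.a. PoE+. Here’s a trivial back-of-the-envelope sketch in C; the device wattages below are made up purely for illustration.)

```c
#include <stdio.h>

/* Power available at the powered device (PD), per the IEEE standards:
 * 802.3af guarantees ~12.95 W at the PD; 802.3at (PoE+) ~25.5 W. */
#define POE_AF_PD_WATTS   12.95
#define POE_AT_PD_WATTS   25.50

/* Hypothetical device loads, invented just for illustration. */
static const struct { const char *name; double watts; } devices[] = {
    { "wireless access point", 10.0 },
    { "PTZ security camera",   20.0 },
    { "beefy PoS terminal",    30.0 },
};

int main(void)
{
    for (size_t i = 0; i < sizeof devices / sizeof devices[0]; i++) {
        double w = devices[i].watts;
        const char *fit = (w <= POE_AF_PD_WATTS) ? "fits 802.3af"
                        : (w <= POE_AT_PD_WATTS) ? "needs 802.3at (PoE+)"
                        : "too hungry for PoE";
        printf("%-24s %5.1f W: %s\n", devices[i].name, w, fit);
    }
    return 0;
}
```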

Part of this is simply the logic that one wire is easier than two. While true, that understates the issue, because the two wires are not the same. Stringing Ethernet comes with far fewer electrical-code requirements than stringing 120 (or 240) V. One you can literally do yourself; the other requires an electrician. (Seriously, I know you’re a smart guy… hire an electrician anyway.)

All of this said, I certainly don’t have visibility into a lot of PoE actually being deployed. Feel free to comment below either if you or someone you know is deploying it or if there’s some other barrier that’s getting in the way.

There’s more info on the Pasadena board here.

May 13, 2014

IoT Via WiFi

posted by Bryon Moyer

We recently looked at levels of data communication in the Internet of Things (IoT) and established three levels:

  • Formal communications protocol level (e.g., TCP/IP)
  • Generic data level (e.g., Xively)
  • Business objects

At the recent Internet of Things Engineering Summit, I talked with another company that illustrates some of how this works. They’re called Econais. (I keep seeing this as looking French, and I want to pronounce it “eh-koh-NAY” – but that’s wrong: it’s a Greek company, and it’s pronounced “ee-KOH-ness”).

Econais recently announced a new module for connecting Things to WiFi. And the focus is on making integration easy: with 20 lines of code, you can connect to a local WiFi network. Assuming your Thing doesn’t have a screen (and, like a motion detector, might even be mounted someplace inconvenient), your phone acts as the keyboard, launching Thing code that gets connection information from the access point. This is part of their ProbMe (“probe me” – named after its pinging capability) in-situ management system.
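
I haven’t seen Econais’s actual SDK, so here’s a purely hypothetical sketch of what a “connect in a handful of lines” flow could look like on a module like this. Every function name below (wifi_init, probme_wait_for_config, and so on) is invented for illustration; don’t mistake it for their API. The stubs just print what a real module would do so the sketch compiles and runs on a desktop.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical module API -- all names invented for illustration,
 * none of this is Econais's actual SDK. */
static bool wifi_init(void) { puts("radio up"); return true; }

static bool probme_wait_for_config(char *ssid, size_t ssid_len,
                                   char *pass, size_t pass_len)
{
    /* In the real flow, a phone app would push these over the air
     * ("phone as keyboard"); here we just fake it. */
    snprintf(ssid, ssid_len, "%s", "HomeNetwork");
    snprintf(pass, pass_len, "%s", "not-a-real-password");
    return true;
}

static bool wifi_connect(const char *ssid, const char *pass)
{
    (void)pass;
    printf("associating with %s\n", ssid);
    return true;
}

int main(void)
{
    char ssid[33], pass[65];

    if (!wifi_init()) return 1;
    if (!probme_wait_for_config(ssid, sizeof ssid, pass, sizeof pass)) return 1;
    if (!wifi_connect(ssid, pass)) return 1;

    puts("on the network; plain TCP from here on");
    return 0;
}
```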

Because Econais implements standards like WiFi and TCP, with no further abstraction, it occupies the comms protocol level (i.e., the first of the three above). But they also partner with Xively, which layers over the protocol level. In fact, for a programmer, both APIs are then available: you can write at the detailed Econais level or at the more abstracted Xively level.

[Figure: the Econais module sits at the comms protocol level, with Xively layered above it]

The overall idea here is that you can get onto the network easily with Econais, but you can then manipulate data more easily at the Xively (or whoever lies above this) level. Of course, the WiFi only goes as far as the access point; to get to the cloud, you then transition to the various other wired (or even wireless) comms protocols that make up the Internet.
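
To make the layering concrete: at the Xively level you stop thinking about sockets and just push named datastreams at a feed over REST. As I understand it, that was a PUT of a small JSON document against a feed ID, authenticated with an API key; the libcurl sketch below uses placeholder feed and key values, and the exact endpoint and payload shape should be treated as my assumption, not gospel.

```c
#include <curl/curl.h>

int main(void)
{
    /* Feed ID and API key are placeholders, not real credentials. The
     * endpoint and JSON shape follow Xively's REST API as I understand
     * it -- check their docs before relying on the details. */
    const char *url  = "https://api.xively.com/v2/feeds/123456789.json";
    const char *body =
        "{ \"version\": \"1.0.0\","
        "  \"datastreams\": [ { \"id\": \"temperature\","
        "                       \"current_value\": \"22.5\" } ] }";

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    struct curl_slist *hdrs = NULL;
    hdrs = curl_slist_append(hdrs, "X-ApiKey: YOUR_API_KEY_HERE");
    hdrs = curl_slist_append(hdrs, "Content-Type: application/json");

    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);

    CURLcode rc = curl_easy_perform(curl);

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}
```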

Econais actually has two families of WiFi module: the 19D01, which doesn’t have an MCU in it (so presumably you attach it to a Thing that already has an MCU), and the recently announced 19W01, which includes an MCU as well as integrated FLASH and an antenna. It’s all a bit confusing since, at the time of this writing, these distinctions aren’t clear on the website or in some of the graphics. But size is an important selling factor for them: the MCU-less version is an 8-mm-square module; the W01 is 14 mm x 12 mm.

And, just as I was preparing to post this, notice came in of a new Lantronix WiFi module for Arduino boards. So it slides into the same category. It is larger, at 24 mm x 16.5 mm.

For more info on Econais’s new W01 board, check out their announcement; for Lantronix, you can find their announcement here.

 

Update (5/14/14): I have some more clarification on the Econais integration story.

  • There's an EC32L module that has an MCU separate from the WiFi chip.
  • The EC19W products integrate the MCU in with the WiFi chip, although the MCU is still available for developer programs. Some of the other hardware interfaces (A/D, GPIO, etc.) are reduced vs. the EC32L.
  • Both of these include FLASH and an antenna, so they're certified by the various international organizations.
  • The EC19D excludes the FLASH and antenna. It's therefore not certified (but presumably a system including it would need to be).
May 07, 2014

How Does Multicore Affect Code Coverage?

posted by Bryon Moyer

Multicore systems can be a b…east to verify code on, depending on how you have things constructed. Left to, say, an OS scheduler, code execution on your average computer is not deterministic because of the possibility of interruption by other programs or external interrupts. So it becomes nigh unto impossible to prove behavior for safety-critical systems.

Lesson #1 from this fact is, “Don’t do that.” Critical code for multicore must be carefully designed to guarantee provably deterministic performance. But lesson #2 is, when tools claim to analyze multicore code, you have to ask some questions to figure out exactly what that means.

Which is what I did when LDRA announced new multicore code coverage analysis. This kind of analysis invariably involves instrumentation of source code, which, by definition, exacerbates concerns about determinism. So what does this mean in LDRA’s case?

I got to spend a few minutes with one of their FAEs, Jay Thomas (yes, they were actually trusting enough – of both of us, frankly – to let an FAE talk to the press) to get a better understanding of what’s going on.

First of all, the scope of the analysis is coverage – determining whether or not a particular piece of code got executed. This is conceptually done by adding a bit of code to (i.e., instrumenting) each “basic block.”

A basic block is a straight-line sequence of code statements without any branches. Because there are no branches, if you enter the block, you know that every line in that block got executed. I suppose, thinking out loud here, that if you put the extra instrumented code at the start of the block, then an interrupt or an unscheduled stop might invalidate the proof; if you place the instrumentation at the end of the basic block (in blue in the figure), then, by reaching it, you can reasonably assert that you had to have executed the prior instructions to get there.

[Figure: a basic block with the instrumentation placed at the end of the block (shown in blue)]
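
To make that concrete, here’s a minimal sketch of what end-of-block instrumentation might look like in source form. The probe macro, block numbering, and scoreboard layout are all invented for illustration; this is not LDRA’s actual instrumentation.

```c
#include <stdint.h>

/* One bit per basic block, packed 32 to a word. Purely illustrative --
 * not LDRA's actual scoreboard layout or probe mechanism. */
static volatile uint32_t coverage[64];

#define COVER(block_id) \
    (coverage[(block_id) / 32] |= 1u << ((block_id) % 32))

int clamp(int x, int lo, int hi)
{
    if (x < lo) {
        x = lo;
        COVER(0);   /* probe at the END of the block: reaching it proves
                       the statements above it were executed */
    } else if (x > hi) {
        x = hi;
        COVER(1);
    } else {
        COVER(2);
    }
    COVER(3);       /* the fall-through block after the if/else */
    return x;
}
```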

The coverage is tracked in a scoreboard-like matrix, and so “checking off” a block involves setting a value in a position of the matrix that corresponds to the block just executed.

The challenge here is performance. A straightforward “index into a matrix” operation involves calculating target addresses each time. This may sound trivial, but apparently it adds up. And multicore makes it worse, not only because you might expect such new programs to be bigger, but because now you have the possibility of collisions. We’ll talk about collisions in a second, but let’s first address performance.

In order to reduce this computational overhead, LDRA implements code that pre-calculates destination addresses at compile time. I haven’t seen exactly how that works, but the effect is analogous to changing an indirect store to a direct store operation. This apparently saves lots of time during program execution.
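
I haven’t seen LDRA’s mechanism, but the analogy is easy to picture: instead of computing a row-and-column offset into the matrix every time a probe fires, bake the final address of each block’s cell into the probe as a constant. Roughly (illustrative names and layout only):

```c
#include <stdint.h>

static volatile uint8_t scoreboard[4][256];   /* layout is illustrative only */

/* Indirect flavor: each probe computes the cell's address from
 * (unit, block) indices at run time. */
#define COVER_INDEXED(unit, block)  (scoreboard[(unit)][(block)] = 1)

/* Pre-calculated flavor: the instrumenter emits the resolved address,
 * so the probe boils down to a single direct store. */
#define COVER_DIRECT(cell_addr)     (*(volatile uint8_t *)(cell_addr) = 1)

void mark(unsigned unit, unsigned block)
{
    COVER_INDEXED(unit, block);          /* address math on every call */
}

void some_instrumented_function(void)
{
    COVER_DIRECT(&scoreboard[2][57]);    /* address is a link-time constant */
}
```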

That aside, let’s return to the collision question. There’s one big scoreboard for the entire program, not for each core. So two cores might try to write at the same time – an impossible situation for a single-core system. There’s some nuance to this, since you might think that memory controllers should hide the fact that two memory requests are made at the same time.

There are lots of ways to design a scoreboard, but for compactness, LDRA packs bits. The memory controller can manage separate words or bytes (or whatever its granularity is), but it can’t manage bit-packing. So if two cores attempt to set bits that happen to be packed into the same word, then there’s an unresolvable collision. And performance means that you don’t want one to be waiting around until the other finishes. (And I can’t imagine what the ugly performance impact would be if you naively tried to spawn separate non-blocking terminal threads for each of those writes to unblock the testing of the code…)

The way LDRA deals with such collisions is to abandon an attempt to check a bit in a word that’s already in use by some other check-off. First come, first serve. In fact, first come, only serve.
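
One way to picture “first come, only serve” (my sketch of the idea, not LDRA’s code): attempt the read-modify-write exactly once with a compare-and-swap, and if another core slipped in between the read and the write, walk away instead of retrying.

```c
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint32_t scoreboard[64];   /* 32 block flags packed per word */

/* Set the bit for block_id, but give up immediately if another core
 * touched the word between our read and our write. Worst case the block
 * looks uncovered for this execution -- a pessimistic miss, never a
 * false claim of coverage. */
static void cover_try_once(unsigned block_id)
{
    _Atomic uint32_t *word = &scoreboard[block_id / 32];
    uint32_t mask     = 1u << (block_id % 32);
    uint32_t expected = atomic_load_explicit(word, memory_order_relaxed);
    uint32_t desired  = expected | mask;

    /* One shot, no retry loop: no core ever waits on another. */
    (void)atomic_compare_exchange_strong_explicit(
        word, &expected, desired,
        memory_order_relaxed, memory_order_relaxed);
}
```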

This means that, even though the instrumentation says to “check off the block,” it may not actually happen if you collide with a different core checking off a different block. For this specific instance, you could consider this a “false positive.” In other words, if you immediately used the resulting bit values to determine whether or not the block got covered, it would say that it didn’t get covered, when in fact it did – it’s just that the logging operation failed.

This is conservative behavior: critically for mission-critical software, it won’t create a false negative. Said differently, coverage tracked in such a way might be better than indicated; it won’t be worse. That’s important to know.

But still, false positives aren’t fun. No one wants to go through a list of “fails” only to find that they weren’t, in fact, fails. It takes a long time to do the analysis, and you end up with this long exception list that just feels… messy, especially when you’re trying to build confidence in the code.

There are two solutions to this issue. The first is to do nothing – literally. Embedded programs love loops, so you may fail to check off a block during one loop iteration; no problem, you’ll probably hit it the next time. For this reason, even though an individual write might indicate a false positive, by the time you’re done executing the entire program, most of those will likely have disappeared.

But there still could be some stragglers remaining. In order to deal with that, LDRA provides control over how many bits get packed into a word. If you make each word sparser, then there are fewer possible collisions. The limit is to have one word per matrix cell. At that level, the memory controller can manage the collisions, and you’re good to go. The cost, of course, is the size of the matrix.
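
That knob is easy to sketch: make the packing density a build-time parameter (the macro name below is invented). At one bit per word, every block’s flag lives in its own word, so the memory system arbitrates simultaneous writes and nothing gets lost – at the cost of a scoreboard that grows by whatever packing factor you gave up.

```c
#include <stdint.h>

/* Packing density as a build-time knob -- name invented for illustration.
 * 32 packs tightly but allows lost check-offs on collisions; 1 gives every
 * block its own word, at 32x the scoreboard size. */
#ifndef COVERAGE_BITS_PER_WORD
#define COVERAGE_BITS_PER_WORD 32
#endif

#define NUM_BLOCKS 4096
#define NUM_WORDS  ((NUM_BLOCKS + COVERAGE_BITS_PER_WORD - 1) / COVERAGE_BITS_PER_WORD)

static volatile uint32_t scoreboard[NUM_WORDS];

static void cover(unsigned block_id)
{
    unsigned word = block_id / COVERAGE_BITS_PER_WORD;
    unsigned bit  = block_id % COVERAGE_BITS_PER_WORD;
    scoreboard[word] |= 1u << bit;
}
```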

You can find more in LDRA’s announcement.
