Ours is a networked world. Anything that’s anything is connected to the Internet. No matter how unrelated, things somehow manage to get from here to somewhere completely different. Like the way your bank account password can magically appear in some server in an obscure corner of Russia. Or how some exalted prince in an exotic distant land like, say, Nigeria, actually knows who you are and trusts you enough to handle his money!
But it wasn’t ever so, and the infrastructure for hooking things onto the internet was once meticulously created from scratch each time. The internet protocols were (and, to some extent, still are) considered heavyweight expensive mechanisms for connecting things. So other connection standards have been used for simpler interconnections, whether hooking your printer to your computer or networking the logic analyzer with the oscilloscope in a lab.
Most of these internet connections use the TCP protocol for managed connections or the UDP protocol for “best effort” message sending, layered onto the IP protocol (TCP/IP or UDP/IP) and most commonly carried over Ethernet. Both TCP and UDP are big standards (TCP especially) with lots of complexity and lots of options. The problem is that everything they have to do demands a lot of attention from your processor, especially when you start using them at high speeds over something like 10G Ethernet. Your processor wouldn’t be able to keep up – and even if it could, there wouldn’t be much juice left for you to do any real work.
And so the network duties have been “offloaded” to dedicated units that take on the burden of packet processing and leave the main CPU to do other things. We won’t get into the arcane under-the-radar debate about whether TCP Offload Engines (TOEs) actually deliver an efficiency gain; we’re going to roll with the general assumption that they do. This and everything else needed to connect you to the Net live on your Network Interface Card (NIC), that innocuous-looking piece of FR-4 in your computer that simultaneously gives you your freedom and identifies your computer uniquely to the rest of the world.
Because of this arrangement, NICs are practically a commodity item. They’re everywhere. And, as with any commodity, manufacturers look for the most common, least expensive implementation. That means they will probably use common, inexpensive ICs – high-volume dedicated chips, microprocessors, or even network processors if unusually high throughput is needed – and they will implement only those aspects of the standards that are required or commonly used. Since their main function is to connect a computer to the internet, for example, they’re going to manage TCP for reliability, meaning that if they send a packet that gets corrupted and they don’t get an acknowledgment, they’ll resend it.
With TCP-UDP/IP/Ethernet being such a common configuration, and with prices coming down, it starts to get easier just to use it for connecting all kinds of things, complex scheme or not, because the support for it is so accessible. It’s acceptable to tolerate what may be excess capability because you don’t really have to worry about the details. It all just works, which is especially nice when the interconnect isn’t the main value of your system.
Far from the madd(en)ing crowd
AdvancedIO, a maker of FPGA-based NICs, has highlighted some applications that can benefit from an Ethernet connection but that may not be well served by your average standard NIC. I mean, let’s face it, they’re using FPGAs: there must be a reason for that. Yes, FPGA prices have come way down over the years, but if you’re looking for the cheapest solution, FPGAs aren’t usually the first place people turn. So if you’re going to compete with commodity NICs using FPGAs, you’d better be bringing something extra to the party.
What they point out is that there are situations where you don’t want the standard vanilla implementation of the network standards. You don’t even want chocolate or strawberry. Maybe you want rambutan with dragonfruit chunks. Or a crème-brûlée-flavored base laced with dark chocolate, coffee, and caramel swirls, punctuated with a smattering of crumbled English toffee. Or even a phở-inspired concoction, rich with chunks of rare steak, book tripe, and smooth, creamy tendon.
I’ll give you a minute to let your stomach settle.
Perhaps waxing less metaphorical would be better. AdvancedIO lays out three examples that require atypical packet handling. Two of the examples relate to systems connecting sensors; the third relates to connecting censors. OK, well, perhaps intelligence/security types.
One of the unique characteristics of sensors and antennas is that they really don’t create packets: they just stream data. Putting this data into packets is an artificial device we use for delivery in the same way that, in some languages, we conveniently place spaces between certain groupings of letters when writing a representation of what is in fact the continuous stream of sound we call speech.
And sensors don’t hold a conversation. They’re all monologue. They don’t come up for air. They don’t let you get a word in edgewise. They don’t care if you heard them or, frankly, if you’re even listening. They might as well be pushing a shopping cart full of blankets and soda bottles down the sidewalk, expounding ad infinitum to no one in particular on observations gleaned from a lifetime of hard living. And if you missed or didn’t understand something they said, don’t bother asking them to repeat it: they’ve moved on, and it’s up to you to figure out what to make of the mental transcript you created.
And this is the key to this example. While TCP is typically set up to faithfully reproduce a conversation, when listening to a sensor, it does no good to stop things and ask for a retransmission if something got garbled along the way. You’re not going to get one; the sensor is too busy sending new data in real time. So not only will you not get a clean copy of the corrupted data, but while you sit around waiting, you’ll also be missing new data. So you might as well suck it up and just accept whatever you get, right or wrong, because it’s all you’re gonna get. A standard NIC won’t operate that way.
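To make the fire-and-forget contrast concrete, here’s a minimal sketch in Python (the loopback addresses and four-byte sample format are purely illustrative): UDP’s sendto() neither waits for an acknowledgment nor retransmits, which is exactly the behavior a free-running sensor stream wants.

```python
import socket

# A minimal "fire and forget" sketch (loopback addresses and the 4-byte
# sample format are illustrative only). UDP's sendto() neither waits for an
# acknowledgment nor retransmits: a lost or corrupted datagram is simply
# gone, which is exactly how a free-running sensor behaves.

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))              # receiver on an ephemeral loopback port
rx.settimeout(2.0)                     # don't wait forever if a datagram is lost
port = rx.getsockname()[1]

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(3):                   # stream three "sensor samples"
    tx.sendto(seq.to_bytes(4, "big"), ("127.0.0.1", port))

samples = [int.from_bytes(rx.recv(4), "big") for _ in range(3)]
print(samples)                         # whatever arrived, in whatever order
tx.close()
rx.close()
```

Had any of those datagrams vanished en route, the receiver would simply never see them: there is no acknowledgment and no retry, by design.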
Another scenario that AdvancedIO points out occurs when capturing sensor data that needs to be sorted out later – for things like aligning multiple-channel sensor data or for accurately timed playback. In this case you may want to timestamp each packet when it arrives. If you really want an accurate record of the arrival time, you have to stamp the packet as soon as possible, meaning at a low level, right on the NIC. You don’t want to wait until the packet gets to some higher level where software can tag it; the vagaries of allocating processor time to threads would make that timing highly variable. Likewise, when you play back, you want to push the timing down to the lowest possible layer to remove the inaccuracies of higher-level processing. And most NICs don’t support this low-level timestamp management.
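For contrast, here’s a sketch of the best a user-space application can do: stamp each datagram the moment recv() returns. That is still a software timestamp, at the mercy of the scheduler; the whole point above is that a stamp applied in hardware on the NIC avoids that jitter entirely. The loopback sender here just stands in for a sensor.

```python
import socket
import time

# Illustrative only: the best user space can do is stamp each datagram the
# moment recv() returns. This is still a software timestamp, subject to
# scheduler jitter; a hardware stamp applied on the NIC itself avoids that
# variability entirely. The loopback sender stands in for a sensor.

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(2.0)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"sample", rx.getsockname())

data = rx.recv(64)
stamp_ns = time.monotonic_ns()         # as close to arrival as user space gets
record = (stamp_ns, data)              # keep (timestamp, payload) for playback
print(record[1])
tx.close()
rx.close()
```

Everything between the wire and that monotonic_ns() call – interrupt handling, the kernel stack, thread scheduling – adds variable delay, which is precisely what stamping on the NIC removes.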
Finally, they point to what they call “raw socket” usage. Typically, the Ethernet delivery mechanism will focus on getting the payload of the packet to where it needs to go, using the header (and any trailer) information to direct that operation. There are applications, typically in the shadowy world of snoopage, where the entire Ethernet frame, including any headers and trailers, is of interest. And here again, your standard NIC won’t support this.
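To illustrate what the whole frame buys you, here’s a sketch that parses the 14-byte Ethernet header a normal NIC strips before the payload ever reaches an application. The frame bytes below are fabricated for illustration; on Linux, an AF_PACKET raw socket (with suitable privileges) delivers real frames in exactly this form, header and all.

```python
import struct

# A sketch of what "raw socket" capture buys you: access to the 14-byte
# Ethernet header that is normally stripped before the payload reaches an
# application. The frame bytes below are fabricated for illustration.

def parse_ethernet_header(frame: bytes):
    """Split out destination MAC, source MAC, and EtherType."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    as_mac = lambda raw: ":".join(f"{b:02x}" for b in raw)
    return as_mac(dst), as_mac(src), hex(ethertype)

# Broadcast destination, made-up source, EtherType 0x0800 (IPv4).
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload..."
print(parse_ethernet_header(frame))
# → ('ff:ff:ff:ff:ff:ff', '00:11:22:33:44:55', '0x800')
```

Who sent the frame, to whom, and what protocol follows: none of that survives in the payload alone, which is why the snoopage crowd wants the frame intact.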
In a manner of speaking, the average NIC supports your average high school student, your football players, your cheerleaders. FPGAs, by contrast, support the goth contingent, the drama kids, the band kids, the stoners out by the bleachers, the burners-to-be. The ones you might not take home to mother, that might slip through the cracks if there weren’t a few charitable souls out there making sure everyone has a place to turn. The ones that don’t get noticed, that never get any respect.
Because your garden-variety NICs and packet-processing chips won’t handle the kinds of things AdvancedIO is trying to support, FPGAs get summoned as the go-to solution for these more offbeat (if not exactly charity) applications. Those forecasting the eventual demise of FPGAs due to the cost reductions of more standard vehicles (even as others forecast increased FPGA use as dedicated chips become ever more expensive to create) fail to give due respect to these corners of the application world. Leaving opportunity for those who do.