
Fog Computing Gets a Champion

Group Hopes to Impose Rationality on Widely Distributed IoT Devices

“Great achievements are accomplished in a blessed, warm fog.” – Joseph Conrad

I’ll just go ahead and say it: Fog computing sounds like a stupid idea.

By that I mean it only sounds stupid. It’s actually a great idea. Rational, commonsensical. Maybe even inevitable. But it’s one of those things that won’t happen by accident, so we need a group of dedicated overseers to make sure this all plays out the way it should. If all goes well, we should have good, effective fog computing within a few years.

So, what the %#@ is “fog computing,” anyway? It sounds like some sort of marketing spin on cloud computing, right? Maybe a cheaper version, or some sort of spinoff intended for the San Francisco market?

“It’s like cloud computing, but closer to the ground,” says Chuck Byers, Technology Chair for the OpenFog Consortium. Fog is essentially a cross between IoT (Internet of Things, natch) and cloud computing. It’s the concept that IoT end nodes – sensors, actuators, cameras, and other gizmos – should all have their own internal computing horsepower. In fact, there should be hierarchies – levels of fog, if you like – to distribute workloads and contain data.

These kinds of things can connect to the cloud – in fact, they should – but they shouldn’t just be dumb devices that squirt raw data up to some remote server. Instead, they should all have local intelligence, local storage, and local autonomy. Think distributed network with distributed intelligent devices. I know, right? Simple.
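To make the idea concrete, here's a minimal sketch of that "local intelligence, local storage" principle. Every name and structure here is hypothetical (neither the OpenFog Consortium nor IEEE-1934 prescribes APIs): an endpoint stores its own raw readings and forwards only a compact summary up the fog hierarchy, rather than squirting everything to a remote server.

```python
# Hypothetical illustration of a fog node: process locally, forward summaries.
# Nothing here comes from IEEE-1934; names and structure are illustrative only.

class FogNode:
    def __init__(self, name, parent=None):
        self.name = name          # e.g. "temp-sensor-1" or "factory-gateway"
        self.parent = parent      # next level up in the fog hierarchy
        self.readings = []        # local storage for raw data

    def ingest(self, value):
        """Store a raw reading locally instead of uploading it."""
        self.readings.append(value)

    def summarize(self):
        """Local intelligence: reduce raw data to a compact summary."""
        if not self.readings:
            return None
        return {
            "node": self.name,
            "count": len(self.readings),
            "mean": sum(self.readings) / len(self.readings),
        }

    def report(self):
        """Send only the summary up the hierarchy (or on to the cloud)."""
        summary = self.summarize()
        if self.parent is not None and summary is not None:
            # The parent sees one number, not the whole raw stream.
            self.parent.ingest(summary["mean"])
        return summary

# A two-level fog hierarchy: a sensor reports to a gateway, not the cloud.
gateway = FogNode("factory-gateway")
sensor = FogNode("temp-sensor-1", parent=gateway)
for t in (21.0, 21.5, 22.0):
    sensor.ingest(t)
print(sensor.report())  # {'node': 'temp-sensor-1', 'count': 3, 'mean': 21.5}
```

The design choice is the whole point of fog: data is contained and reduced at each level, so bandwidth to the cloud scales with the number of summaries, not the number of raw samples.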

Fog computing (regardless of what you call it) sounds so reasonable and so obvious that it hardly needs its own name, much less its own consortium and its own trade show: the Fog World Congress, held, where else, in San Francisco. Isn’t this all just… y’know… good engineering? Do we really need a group to tell us how to make smart IoT devices that work together?

Probably, yes. As people have been saying for at least two thousand years, “There’s many a slip ‘twixt cup and lip.” Simple in concept, maybe, but tricky in practice. The goal of the OpenFog Consortium is that we don’t all shoot ourselves in the collective foot by creating IoT networks that don’t follow good, basic fog-computing practices.

And what practices are those? Glad you asked, because there’s a 170-page document that outlines what you should and shouldn’t be doing. Called the OpenFog Reference Architecture for Fog Computing, the manifesto has risen to the level of official IEEE standard. You can download IEEE-1934 and learn about best practices for partitioning systems, minimizing latency, securing data, building in fault tolerance, managing and orchestrating devices, and more.

What IEEE-1934 doesn’t spell out, and what the OpenFog Consortium isn’t interested in promoting, is specific technical solutions to these problems. In other words, they don’t recommend a particular operating system, wireless protocol, interface standard, encryption algorithm, or anything else that has a part number or a circuit diagram associated with it. At this point, the group’s goals are more high-level than that. OpenFog isn’t promoting standards in the electrical sense. They’re more what you’d call guidelines.

Scalability is a big issue to the OpenFog group, and this, too, seems inevitable. Cisco’s John Chambers famously predicted 500 billion Internet-connected devices by 2025, and if even 0.1% of those have some local intelligence, you’re still looking at 500 million smart IoT endpoints in a few years. That’s a lot of semi-intelligent gadgets to set up, configure, and connect to some network or other. A quick back-of-the-napkin calculation suggests that’s going to require a whole lot of engineers tweaking a whole lot of configuration data in a very short amount of time. Either we need to hire trained monkeys to type in IP addresses and encryption keys and upload configuration files, or we need to make (at least some of) those devices self-configuring. Again, OpenFog doesn’t tell us how we must do it, only that we must do it.
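For the record, the back-of-the-napkin arithmetic above is easy to check:

```python
# The article's back-of-the-napkin estimate: even a tiny fraction of
# Chambers's predicted 500 billion connected devices is a huge number.
total_devices = 500_000_000_000   # 500 billion, per the Cisco prediction
smart_fraction = 0.001            # assume just 0.1% have local intelligence
smart_endpoints = total_devices * smart_fraction
print(f"{smart_endpoints:,.0f} smart IoT endpoints")  # prints "500,000,000 smart IoT endpoints"
```

Half a billion endpoints, and that's under a deliberately pessimistic assumption about how many devices get local smarts.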

But that may change. As its follow-up act, the group has started working on the OpenFog Technical Framework, a more detailed specification with “about 90 normative standards requirements so far, eventually heading to several hundred,” according to Byers. When it’s complete, this document will supply the satisfying implementation details and interoperability guidelines to go along with IEEE-1934’s overarching architecture.

In the meantime, there are fog-compatible products out in the wild. Nebbiolo Technologies has its FogOS operating system, FogSM system-management software, and FogNode x86-based hardware. There’s also EdgeX Foundry, an open-source project under the auspices of the Linux Foundation specifically for fog-computing platforms. The OpenFog Consortium values both, without explicitly promoting either.

It all seems so straightforward. Surely everyone in the industry is bobbing their heads in unison, cheering on the efforts of the OpenFog Consortium? Eh, not so much. As with any movement, there is disagreement over the battle plan. Yes, we need to connect a zillion devices to the Internet and give them all some local intelligence, but how? Plenty of edge-computing stakeholders have good ideas about this, and some are more implacable than others.

There are the inevitable geographic differences, too. Industrial firms in Europe, for example, don’t always agree with those in China, or in North America. As all these groups collide, cooperate, or coalesce, Byers says we’ll see an “elephant dance” as the prevailing standards work themselves out. Maybe it’ll all be based on IEEE-1934 in the end, but then again, maybe not.

Whether it’s the OpenFog Consortium’s solution or not, it’s clear that we need some rational framework for dealing with all the gadgets we’re creating. Either that, or it’s time to start training the monkeys.

