“Great achievements are accomplished in a blessed, warm fog.” – Joseph Conrad
I’ll just go ahead and say it: Fog computing sounds like a stupid idea.
By that I mean it only sounds stupid. It’s actually a great idea. Rational, commonsensical. Maybe even inevitable. But it’s one of those things that won’t happen by accident, so we need a group of dedicated overseers to make sure this all plays out the way it should. If all goes well, we should have good, effective fog computing within a few years.
So, what the %#@ is “fog computing,” anyway? It sounds like some sort of marketing spin on cloud computing, right? Maybe a cheaper version, or some sort of spinoff intended for the San Francisco market?
“It’s like cloud computing, but closer to the ground,” says Chuck Byers, Technology Chair for the OpenFog Consortium. Fog is essentially a cross between IoT (Internet of Things, natch) and cloud computing. It’s the concept that IoT end nodes – sensors, actuators, cameras, and other gizmos – should all have their own internal computing horsepower. In fact, there should be hierarchies – levels of fog, if you like – to distribute workloads and contain data.
These kinds of things can connect to the cloud – in fact, they should – but they shouldn’t just be dumb devices that squirt raw data up to some remote server. Instead, they should all have local intelligence, local storage, and local autonomy. Think distributed network with distributed intelligent devices. I know, right? Simple.
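To make that distinction concrete, here’s a minimal sketch in Python of what the local intelligence might look like. Everything in it – the class name, the window size, the alarm threshold, the message shapes – is invented for illustration and comes from no OpenFog document. The point is simply that a fog node stores and reacts to raw readings locally, and sends only summaries and alarms upstream:

```python
import statistics

class FogNode:
    """Illustrative fog endpoint: keep raw data local, forward only digests.

    A minimal sketch; names, window size, and threshold are all made up.
    """

    def __init__(self, window_size=60, alarm_threshold=85.0):
        self.window = []                     # local storage: recent raw readings
        self.window_size = window_size
        self.alarm_threshold = alarm_threshold

    def ingest(self, reading):
        """Handle one raw sensor reading. Most readings never leave the node."""
        self.window.append(reading)
        if reading > self.alarm_threshold:   # local autonomy: react immediately
            return {"type": "alarm", "value": reading}
        if len(self.window) >= self.window_size:
            summary = {                      # send a digest upstream, not raw data
                "type": "summary",
                "mean": statistics.mean(self.window),
                "max": max(self.window),
                "n": len(self.window),
            }
            self.window.clear()
            return summary
        return None

# A "dumb device" would upload all 61 raw readings; this node sends two messages.
node = FogNode()
for t, reading in enumerate([20.0] * 60 + [90.0]):
    msg = node.ingest(reading)
    if msg:
        print(t, msg)
```

Stack a few of these on top of one another – endpoints reporting to gateways reporting to aggregators – and you have the hierarchy, the levels of fog, described above.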
Fog computing (regardless of what you call it) sounds so reasonable and so obvious that it hardly needs its own name, much less its own consortium and its own trade show: the Fog World Congress, held in (where else?) San Francisco. Isn’t this all just… y’know… good engineering? Do we really need a group to tell us how to make smart IoT devices that work together?

Probably, yes. As people have been saying for at least two thousand years, “There’s many a slip ’twixt cup and lip.” Simple in concept, maybe, but tricky in practice. The OpenFog Consortium’s goal is to make sure we don’t all shoot ourselves in the collective foot by building IoT networks that ignore good, basic fog-computing practices.
And what practices are those? Glad you asked, because there’s a 170-page document that outlines what you should and shouldn’t be doing. Called the OpenFog Reference Architecture for Fog Computing, the manifesto has risen to the level of an official IEEE standard. You can download IEEE-1934 and learn about best practices for partitioning systems, minimizing latency, securing data, implementing fault tolerance, managing and orchestrating devices, and more.
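To give the flavor of just one of those topics – partitioning and latency – the architecture’s big idea is that work should run at whichever tier of the hierarchy its deadline allows. Here’s a toy dispatcher along those lines; the tiers, the round-trip figures, and the function itself are entirely my own invention, since IEEE-1934 describes an architecture, not code:

```python
# Toy placement policy: run each task at the most centralized tier that still
# meets its latency budget. Tiers and round-trip times are invented numbers.
TIERS = [
    ("endpoint", 0.001),        # on the device itself: ~1 ms round trip
    ("fog-node", 0.010),        # local gateway: ~10 ms
    ("fog-aggregator", 0.050),  # neighborhood level: ~50 ms
    ("cloud", 0.200),           # remote data center: ~200 ms
]

def place(task, latency_budget_s):
    """Pick the most centralized (and most capable) tier that meets the deadline."""
    for tier, round_trip in reversed(TIERS):
        if round_trip <= latency_budget_s:
            return tier
    raise ValueError(f"{task}: no tier can meet a {latency_budget_s}s budget")

print(place("close-the-valve", 0.005))   # -> endpoint: too urgent to leave the node
print(place("re-tune-the-line", 0.040))  # -> fog-node: local, but off-device
print(place("monthly-analytics", 5.0))   # -> cloud: latency hardly matters here
```

Real placement also has to weigh bandwidth, security, and fault tolerance, which is part of why the document runs to 170 pages.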
What IEEE-1934 doesn’t spell out, and what the OpenFog Consortium isn’t interested in promoting, is specific technical solutions to these problems. In other words, they don’t recommend a particular operating system, wireless protocol, interface standard, encryption algorithm, or anything else that has a part number or a circuit diagram associated with it. At this point, the group’s goals are more high-level than that. OpenFog isn’t promoting standards in the electrical sense. They’re more what you’d call guidelines.
Scalability is a big issue for the OpenFog group, and this, too, seems inevitable. Cisco’s John Chambers famously predicted 500 billion Internet-connected devices by 2025, and if even 0.1% of those have some local intelligence, you’re still looking at 500 million smart IoT endpoints in a few years. That’s a lot of semi-intelligent gadgets to set up, configure, and connect to some network or other. A quick back-of-the-napkin calculation suggests that’s going to require a whole lot of engineers tweaking a whole lot of configuration data in a very short amount of time. Either we need to hire trained monkeys to type in IP addresses and encryption keys and upload configuration files, or we need to make (at least some of) those devices self-configuring. Again, OpenFog doesn’t tell us how we must do it, only that we must do it.
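What might self-configuration look like? Here’s one speculative sketch: a freshly powered-on device broadcasts a hello, and the nearest fog node hands back an address and a configuration, no typing required. The message shapes, the address pool, and the upstream hostname below are all invented; neither OpenFog nor IEEE-1934 specifies any mechanism like this:

```python
import json, uuid

# Speculative zero-touch provisioning. Everything here is made up for
# illustration; no standard prescribes these messages or this flow.

class FogProvisioner:
    """Runs on a fog node; hands out configuration to newly booted devices."""

    def __init__(self, subnet="10.0.42"):
        self.subnet = subnet
        self.next_host = 2
        self.registry = {}          # device_id -> issued configuration

    def handle_hello(self, hello_msg):
        hello = json.loads(hello_msg)
        addr = f"{self.subnet}.{self.next_host}"
        self.next_host += 1
        config = {
            "device_id": hello["device_id"],
            "address": addr,
            "report_interval_s": 60,
            "upstream": "fog-node-07.example.invalid",  # hypothetical hostname
        }
        self.registry[hello["device_id"]] = config
        return json.dumps(config)

def device_boot(provisioner):
    """What a self-configuring endpoint does instead of a human typing IPs."""
    hello = json.dumps({"device_id": str(uuid.uuid4()), "kind": "temp-sensor"})
    config = json.loads(provisioner.handle_hello(hello))
    print("configured:", config["address"], "->", config["upstream"])

prov = FogProvisioner()
for _ in range(3):      # three devices provisioned, zero keystrokes
    device_boot(prov)
```

Whether the eventual answer looks anything like this, or like DHCP with credentials bolted on, or something else entirely, the consortium leaves open.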
But that may change. As its follow-up act, the group has started working on the OpenFog Technical Framework, a more detailed specification with “about 90 normative standards requirements so far, eventually heading to several hundred,” according to Byers. When it’s complete, this document will supply the satisfying implementation details and interoperability guidelines to go along with IEEE-1934’s overarching architecture.
In the meantime, there are fog-compatible products out in the wild. Nebbiolo Technologies has its FogOS operating system, FogSM system-management software, and FogNode x86-based hardware. There’s also EdgeX Foundry, an open-source project under the auspices of the Linux Foundation specifically for fog-computing platforms. The OpenFog Consortium values both, without explicitly promoting either.
It all seems so straightforward. Surely everyone in the industry is bobbing their heads in unison, cheering on the efforts of the OpenFog Consortium? Eh, not so much. As with any movement, there is disagreement over the battle plan. Yes, we need to connect a zillion devices to the Internet and give them all some local intelligence, but how? Plenty of edge-computing stakeholders have good ideas about this, and some are more implacable than others.
There are the inevitable geographic differences, too. Industrial firms in Europe, for example, don’t always agree with those in China, or in North America. As all these groups collide, cooperate, or coalesce, Byers says we’ll see an “elephant dance” as the prevailing standards work themselves out. Maybe it’ll all be based on IEEE-1934 in the end, but then again, maybe not.
Whether it’s the OpenFog Consortium’s solution or not, it’s clear that we need some rational framework for dealing with all the gadgets we’re creating. Either that, or it’s time to start training the monkeys.