
Fog Computing Gets a Champion

Group Hopes to Impose Rationality on Widely Distributed IoT Devices

“Great achievements are accomplished in a blessed, warm fog.” – Joseph Conrad

I’ll just go ahead and say it: Fog computing sounds like a stupid idea.

By that I mean it only sounds stupid. It’s actually a great idea. Rational, commonsensical. Maybe even inevitable. But it’s one of those things that won’t happen by accident, so we need a group of dedicated overseers to make sure this all plays out the way it should. If all goes well, we should have good, effective fog computing within a few years.

So, what the %#@ is “fog computing,” anyway? It sounds like some sort of marketing spin on cloud computing, right? Maybe a cheaper version, or some sort of spinoff intended for the San Francisco market?

“It’s like cloud computing, but closer to the ground,” says Chuck Byers, Technology Chair for the OpenFog Consortium. Fog is essentially a cross between IoT (Internet of Things, natch) and cloud computing. It’s the concept that IoT end nodes – sensors, actuators, cameras, and other gizmos – should all have their own internal computing horsepower. In fact, there should be hierarchies – levels of fog, if you like – to distribute workloads and contain data.

These kinds of things can connect to the cloud – in fact, they should – but they shouldn’t just be dumb devices that squirt raw data up to some remote server. Instead, they should all have local intelligence, local storage, and local autonomy. Think distributed network with distributed intelligent devices. I know, right? Simple.

Fog computing (regardless of what you call it) sounds so reasonable and so obvious that it hardly needs its own name, much less its own consortium and its own trade show: the Fog World Congress, held (where else?) in San Francisco. Isn’t this all just… y’know… good engineering? Do we really need a group to tell us how to make smart IoT devices that work together?

Probably, yes. As people have been saying for at least two thousand years, “There’s many a slip ‘twixt cup and lip.” Simple in concept, maybe, but tricky in practice. The goal of the OpenFog Consortium is to make sure we don’t all shoot ourselves in the collective foot by building IoT networks that ignore good, basic fog-computing practices.

And what practices are those? Glad you asked, because there’s a 170-page document that outlines what you should and shouldn’t be doing. Called the OpenFog Reference Architecture for Fog Computing, the manifesto has risen to the level of official IEEE standard. You can download IEEE-1934 and learn about best practices for partitioning systems, minimizing latency, securing data, implementing fault tolerance, managing and orchestrating devices, and more.

What IEEE-1934 doesn’t spell out, and what the OpenFog Consortium isn’t interested in promoting, is specific technical solutions to these problems. In other words, they don’t recommend a particular operating system, wireless protocol, interface standard, encryption algorithm, or anything else that has a part number or a circuit diagram associated with it. At this point, the group’s goals are more high-level than that. OpenFog isn’t promoting standards in the electrical sense. They’re more what you’d call guidelines.

Scalability is a big issue for the OpenFog group, and this, too, seems inevitable. Cisco’s John Chambers famously predicted 500 billion Internet-connected devices by 2025, and if even 0.1% of those have some local intelligence, you’re still looking at 500 million smart IoT endpoints in a few years. That’s a lot of semi-intelligent gadgets to set up, configure, and connect to some network or other. A quick back-of-the-napkin calculation suggests that’s going to require a whole lot of engineers tweaking a whole lot of configuration data in a very short amount of time. Either we need to hire trained monkeys to type in IP addresses and encryption keys and upload configuration files, or we need to make (at least some of) those devices self-configuring. Again, OpenFog doesn’t tell us how we must do it, only that we must do it.

But that may change. As its follow-up act, the group has started working on the OpenFog Technical Framework, a more detailed specification with “about 90 normative standards requirements so far, eventually heading to several hundred,” according to Byers. When it’s complete, this document will supply the satisfying implementation details and interoperability guidelines to go along with IEEE-1934’s overarching architecture.

In the meantime, there are fog-compatible products out in the wild. Nebbiolo Technologies has its FogOS operating system, FogSM system-management software, and FogNode x86-based hardware. There’s also EdgeX Foundry, an open-source project under the auspices of the Linux Foundation specifically for fog-computing platforms. The OpenFog Consortium values both, without explicitly promoting either.

It all seems so straightforward. Surely everyone in the industry is bobbing their heads in unison, cheering on the efforts of the OpenFog Consortium? Eh, not so much. As with any movement, there is disagreement over the battle plan. Yes, we need to connect a zillion devices to the Internet and give them all some local intelligence, but how? Plenty of edge-computing stakeholders have good ideas about this, and some are more implacable than others.

There are the inevitable geographic differences, too. Industrial firms in Europe, for example, don’t always agree with those in China, or in North America. As all these groups collide, cooperate, or coalesce, Byers says we’ll see an “elephant dance” as the prevailing standards work themselves out. Maybe it’ll all be based on IEEE-1934 in the end, but then again, maybe not.

Whether it’s the OpenFog Consortium’s solution or not, it’s clear that we need some rational framework for dealing with all the gadgets we’re creating. Either that, or it’s time to start training the monkeys.
