
Fog Computing Gets a Champion

Group Hopes to Impose Rationality on Widely Distributed IoT Devices

“Great achievements are accomplished in a blessed, warm fog.” – Joseph Conrad

I’ll just go ahead and say it: Fog computing sounds like a stupid idea.

By that I mean it only sounds stupid. It’s actually a great idea. Rational, commonsensical. Maybe even inevitable. But it’s one of those things that won’t happen by accident, so we need a group of dedicated overseers to make sure this all plays out the way it should. If all goes well, we should have good, effective fog computing within a few years.

So, what the %#@ is “fog computing,” anyway? It sounds like some sort of marketing spin on cloud computing, right? Maybe a cheaper version, or some sort of spinoff intended for the San Francisco market?

“It’s like cloud computing, but closer to the ground,” says Chuck Byers, Technology Chair for the OpenFog Consortium. Fog is essentially a cross between IoT (Internet of Things, natch) and cloud computing. It’s the concept that IoT end nodes – sensors, actuators, cameras, and other gizmos – should all have their own internal computing horsepower. In fact, there should be hierarchies – levels of fog, if you like – to distribute workloads and contain data.

These kinds of things can connect to the cloud – in fact, they should – but they shouldn’t just be dumb devices that squirt raw data up to some remote server. Instead, they should all have local intelligence, local storage, and local autonomy. Think distributed network with distributed intelligent devices. I know, right? Simple.
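
To make that concrete, here's a minimal sketch in Python (with invented names, thresholds, and a fake sensor, nothing prescribed by OpenFog or IEEE-1934) of the difference between a dumb node that squirts raw readings upstream and a fog node that keeps its data local and sends up only what matters:

    # Hypothetical fog-node loop: process readings locally, keep the raw log on
    # the node, and send the cloud only summaries and alerts.
    import random
    import statistics
    import time

    ALERT_THRESHOLD = 90   # invented; a real node would get this from policy
    RAW_LOG = []           # stands in for local storage

    def read_sensor():
        return random.gauss(70, 15)    # placeholder for a real sensor driver

    def send_to_cloud(payload):
        print("uplink:", payload)      # placeholder for whatever uplink is used

    def fog_loop(window=60):
        readings = []
        for _ in range(window):
            value = read_sensor()
            RAW_LOG.append((time.time(), value))   # stays local, not in the cloud
            readings.append(value)
            if value > ALERT_THRESHOLD:
                send_to_cloud({"alert": round(value, 1)})   # local autonomy: react now
        # One summary per window instead of `window` raw samples
        send_to_cloud({"mean": round(statistics.mean(readings), 1),
                       "max": round(max(readings), 1),
                       "count": len(readings)})

    fog_loop()

The point is the shape of the traffic: one summary (plus the occasional alert) per window instead of a raw firehose, with the raw log staying on the node.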

Fog computing (regardless of what you call it) sounds so reasonable and so obvious that it hardly needs its own name, much less its own consortium and its own trade show: the Fog World Congress, held where else but in San Francisco. Isn’t this all just… y’know… good engineering? Do we really need a group to tell us how to make smart IoT devices that work together?

Probably, yes. As people have been saying for at least two thousand years, “There’s many a slip ‘twixt cup and lip.” Simple in concept, maybe, but tricky in practice. The goal of the OpenFog Consortium is to make sure we don’t all shoot ourselves in the collective foot by building IoT networks that ignore good, basic fog-computing practices.

And what practices are those? Glad you asked, because there’s a 170-page document that outlines what you should and shouldn’t be doing. Called the OpenFog Reference Architecture for Fog Computing, the manifesto has risen to the level of an official IEEE standard. You can download IEEE-1934 and learn about best practices for partitioning systems, minimizing latency, securing data, implementing fault tolerance, managing and orchestrating devices, and more.

What IEEE-1934 doesn’t spell out, and what the OpenFog Consortium isn’t interested in promoting, is specific technical solutions to these problems. In other words, they don’t recommend a particular operating system, wireless protocol, interface standard, encryption algorithm, or anything else that has a part number or a circuit diagram associated with it. At this point, the group’s goals are more high-level than that. OpenFog isn’t promoting standards in the electrical sense. They’re more what you’d call guidelines.

Scalability is a big issue to the OpenFog group, and this, too, seems inevitable. Cisco’s John Chambers famously predicted 500 billion Internet-connected devices by 2025, and if even 0.1% of those have some local intelligence, you’re still looking at 500 million smart IoT endpoints in a few years. That’s a lot of semi-intelligent gadgets to set up, configure, and connect to some network or other. A quick back-of-the-napkin calculation suggests that’s going to require a whole lot of engineers tweaking a whole lot of configuration data in a very short amount of time. Either we need to hire trained monkeys to type in IP addresses and encryption keys and upload configuration files, or we need to make (at least some of) those devices self-configuring. Again, OpenFog doesn’t tell us how we must do it, only that we must do it.
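
For illustration only (my numbers, not OpenFog’s), here’s what that napkin math looks like if every one of those devices needed even ten minutes of human attention:

    # Illustrative napkin math, not an OpenFog figure.
    devices = 500_000_000         # smart endpoints from the estimate above
    minutes_per_device = 10       # assumed: typing addresses, keys, config files
    work_hours_per_year = 2_000   # roughly one full-time engineer-year

    total_hours = devices * minutes_per_device / 60
    engineer_years = total_hours / work_hours_per_year
    print(f"{total_hours:,.0f} hours, or about {engineer_years:,.0f} engineer-years")
    # roughly 83 million hours, on the order of 40,000 engineer-years

That’s the case for self-configuration in one print statement.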

But that may change. As its follow-up act, the group has started working on the OpenFog Technical Framework, a more detailed specification with “about 90 normative standards requirements so far, eventually heading to several hundred,” according to Byers. When it’s complete, this document will supply the satisfying implementation details and interoperability guidelines to go along with IEEE-1934’s overarching architecture.

In the meantime, there are fog-compatible products out in the wild. Nebbiolo Technologies has its FogOS operating system, FogSM system-management software, and FogNode x86-based hardware. There’s also EdgeX Foundry, an open-source project under the auspices of the Linux Foundation specifically for fog-computing platforms. The OpenFog Consortium values both, without explicitly promoting either.

It all seems so straightforward. Surely everyone in the industry is bobbing their heads in unison, cheering on the efforts of the OpenFog Consortium? Eh, not so much. As with any movement, there is disagreement over the battle plan. Yes, we need to connect a zillion devices to the Internet and give them all some local intelligence, but how? Plenty of edge-computing stakeholders have good ideas about this, and some are more implacable than others.

There are the inevitable geographic differences, too. Industrial firms in Europe, for example, don’t always agree with those in China, or in North America. As all these groups collide, cooperate, or coalesce, Byers says we’ll see an “elephant dance” as the prevailing standards work themselves out. Maybe it’ll all be based on IEEE-1934 in the end, but then again, maybe not.

Whether it’s the OpenFog Consortium’s solution or not, it’s clear that we need some rational framework for dealing with all the gadgets we’re creating. Either that, or it’s time to start training the monkeys.
