
2017 – The IoT Administration Begins

A New Leader for Electronic Design

During the fifty-year history of Moore’s Law, technological progress in electronics has served two distinct masters. While the industry produces an enormous range of technologies, deployed in countless systems and addressing innumerable application domains, there has always been one clear driver, one prototype system, one application that rules them all and bends our collective innovative energies to its will.

First, we had the PC administration. Technological innovation followed and served computing – specifically, personal computers and the first generation of internet connectivity. While the chips, tools, and boards we created were applicable to everything from home stereos to spaceships, the economic and technological rules were written by those who populated the world with ubiquitous connected desktop computing. Computers needed more speed to run consumer-friendly GUI-based operating systems, so semiconductor processes were designed to maximize megahertz, MIPS, and Mbps in whatever way possible. This led to monolithic microprocessors loaded with power-hungry transistors, packed into fan-cooled enclosures with big ol’ heat sinks. It gave us PCI, Ethernet, and first-generation USB.

The PC administration also brought us unseen legions of packet-switching contraptions, mostly crammed with dozens of the fastest FPGAs Taiwan could fabricate – humming away night and day – directing and delivering each precious packet of primitive web content safely to the waiting eyes of the world’s exponentially growing community of wired citizenry. The appetite for bandwidth was so voracious that cost, power consumption, form-factor, and just about every other imaginable design constraint took a back seat to the laying of pipe and pavement on the information superhighway.

Then, almost unceremoniously, the PC administration gave over the helm to a new master – Mobility. Practically overnight, our priorities shifted. Now power, cost, and form factor took center stage. Wiggling transistors as fast as physically possible no longer seemed like a good idea. Leakage current became our enemy. We set about using the bounty of transistors Mr. Moore had bequeathed us in a different way – eking out the most computation per coulomb with clever strategies for clock gating, parallelizing complex tasks, and reducing the impact of complex software systems. One day of battery life became the immovable object. Slipping a device comfortably into a jeans pocket outweighed doubling the Fmax. Fitting the BOM cost into something that could be given away for free with a 2-year service plan sidelined the previous generation’s big-iron Intel/AMD micros, in favor of ARM-architected application cores thriftily booting stripped-down variations of UNIX.

The Mobility administration saw the build-out of the wireless infrastructure, and populating the world with more cell towers took priority over bolstering the backbone of the internet. We wanted skateboarding dog videos, and we wanted them wherever and whenever the mood struck us. The wireless data economy wielded enormous power, and the industry responded with standards, protocols, semiconductor processes, chips, connectors, and PCB technologies that allowed us to pave the planet with pocket-sized quad-core 64-bit processing marvels at prices low enough that the average student could upgrade annually.

Now, however, there’s a new sheriff in town. Mobility is giving way to IoT, and the implications for technology development span the stack from top to bottom. IoT is as different an animal from mobile as mobile was from desktop. IoT encompasses the gamut of challenges from infinitesimally inexpensive edge devices quietly gathering sensor data using tiny trickles of harvested energy – to enormous cloud data centers sucking zettabytes of content through monstrous information pipes – processing, storing and returning it with almost incomprehensible computing power, and trying to get by within the maximum energy budget the local utility can possibly provide.

At the base semiconductor level, the challenge is no longer “Cramming More Components onto Integrated Circuits.” Instead, it’s more subtle – like “cramming more different types of highly-efficient components into smaller, cheaper modules.” Many IoT edge devices need to sip the tiniest rations of power while keeping at least some “always on” monitoring of MEMS and other sensors. This means integration of digital, analog, and even MEMS into low-cost, small-form-factor, ultra-low-power packages. The ability to stitch disparate types of technology together on one silicon substrate or interposer – logic, memory, analog – perhaps even MEMS – is a formidable weapon in the IoT edge war.
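To make the duty-cycling idea behind “always on” sensing concrete, here is a minimal Python sketch (a simulation, not firmware) of an edge node that spends nearly all of its time asleep, wakes briefly to sample a MEMS accelerometer, and spends energy on its radio only when a reading crosses a threshold. The current figures, sensor read, and threshold are hypothetical stand-ins; a real device would use its vendor’s sleep modes and sensor drivers.

```python
import random

# Hypothetical current draws (mA), for illustration only.
SLEEP_CURRENT_MA = 0.002    # deep sleep, sensor in low-power motion-watch mode
SAMPLE_CURRENT_MA = 1.5     # brief wake-up to read the MEMS sensor
RADIO_CURRENT_MA = 20.0     # radio burst, used only when something interesting happens

def read_accelerometer():
    """Stand-in for a real MEMS driver read; returns acceleration in g."""
    return random.gauss(1.0, 0.05)

def estimate_average_current(cycles=1000, wake_period_s=1.0, threshold_g=1.15):
    """Model a node that sleeps, wakes briefly each period, and transmits only on events."""
    awake_s = radio_s = 0.0
    for _ in range(cycles):
        # In firmware, this is where the MCU would enter deep sleep for wake_period_s.
        reading = read_accelerometer()
        awake_s += 0.001                 # assume ~1 ms awake per sample
        if reading > threshold_g:        # rare event: justify a short radio burst
            radio_s += 0.010
    total_s = cycles * wake_period_s
    sleep_s = total_s - awake_s - radio_s
    charge_mc = (sleep_s * SLEEP_CURRENT_MA +
                 awake_s * SAMPLE_CURRENT_MA +
                 radio_s * RADIO_CURRENT_MA)
    return charge_mc / total_s           # average current in mA

if __name__ == "__main__":
    print(f"Estimated average current: {estimate_average_current():.4f} mA")
```

With these assumed numbers, the average draw works out to a few microamps and is dominated by the sleep and sampling states – which is the arithmetic behind integrating low-leakage, always-on analog and MEMS blocks alongside the digital logic.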

Not all IoT edge nodes are monitoring simple inertial sensors, however. Many of our IoT devices need far more formidable senses – like vision. For that, we need impressive computing power coupled with small form factors and power budgets. In this realm (and in many other parts of IoT), the key is heterogeneous distributed processing. Solving the complete problem requires a combination of processors with different architectures at different points in the signal chain. New SoCs combining conventional processors with FPGA-based accelerators can hash through piles of incoming video, distilling the interesting bits into a much smaller data stream that can be passed upstream toward the cloud. In the data center, FPGAs, GPUs, and server processors may divide the workload further, running neural algorithms that identify persons, places, things, and activities from big-data warehouses before passing their analysis back downstream to other nodes – perhaps once again at the edge – to take some action.

In fact, one of the most critical concepts in IoT may be heterogeneous distributed processing. While heavy-iron von Neumann machines are fast and flexible, there are few tasks for which they are the optimal solution. Decomposing complex, system-level algorithms into pieces that can run in parallel on application-appropriate optimized hardware accelerators of various types (FPGAs, GPUs, MCUs, real-time processors, low-power application processors, and big-money server processors), and putting the appropriate types of workloads onto the correct processing nodes at the correct place in the signal chain (optimizing computation, networking, and storage resources), is a daunting challenge that we have only begun to address in the most primitive fashion.
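As a rough sketch of what that decomposition looks like, the Python fragment below models a camera-to-cloud vision workload as a list of stages, each annotated with the kind of node it suits and the fraction of data it passes upstream. The stage names, node assignments, and reduction factors are illustrative assumptions rather than measurements; the point is simply that most of the bytes die at the edge while most of the heavy inference lives in the data center.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    node: str          # the kind of hardware this stage suits (assumed)
    reduction: float   # fraction of incoming bytes passed to the next stage (assumed)

# Illustrative decomposition of a camera-to-cloud vision workload.
PIPELINE = [
    Stage("capture + denoise",         "edge FPGA fabric",      1.00),
    Stage("motion / region detection", "edge FPGA fabric",      0.05),
    Stage("feature extraction",        "edge application CPU",  0.50),
    Stage("neural classification",     "cloud GPU / FPGA pool", 0.01),
    Stage("actuation decision",        "edge MCU",              1.00),
]

def route(bytes_per_second: float) -> None:
    """Walk the pipeline, showing where each stage runs and how the data stream shrinks."""
    for stage in PIPELINE:
        print(f"{stage.name:26s} -> {stage.node:22s} in: {bytes_per_second:14,.0f} B/s")
        bytes_per_second *= stage.reduction

if __name__ == "__main__":
    route(1920 * 1080 * 3 * 30)   # raw 1080p video: 30 frames/s, 3 bytes per pixel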

Solving this heterogeneous distributed computing problem is primarily a tool challenge. The industry will need a new generation of smarter software development tools that can target massive networked applications to arbitrary configurations of available hardware resources (only some of which will be conventional processors). Doing so has the potential to improve the energy efficiency of our computing infrastructure by several orders of magnitude, independent of future semiconductor technology gains from Moore’s Law. This is critical because successful global deployment of IoT to its full potential would quickly overtax both the total available computing power and the total available electric power we have today.

On the hardware side, the art of creating the appropriate system-level architecture for IoT deployments is still a vast unexplored territory. Even in the isolated arena of using FPGAs for data-center acceleration, the smartest companies in the world don’t agree at even the most basic level. Recently, we saw Xilinx win a deployment with Amazon for FPGA clusters to be made available as pooled resources on cloud-based servers, while Intel/Altera pursues a much finer-grained strategy of pairing conventional server processors with FPGAs at the package level. These two architectural approaches are vastly different, and there is strong disagreement among experts about which approach is better (we’ll discuss this more in an upcoming article).

Also, IoT brings with it the most substantial software challenges we’ve ever seen. Developing single applications that span the entire gamut from tiny edge devices to cloud-based computing and storage resources calls for a new breadth of expertise in software teams, as well as a new level of cooperation between the software and hardware sides of the house.

Not to be left out, the networking infrastructure that we’ve created for desktop and mobile falls short when it comes to IoT. The demands of billions of new nodes, many of which will require small duty cycles and tiny bandwidths, present serious challenges for the current YouTube-streaming mobile infrastructure. As we’ve discussed extensively in these pages, new standards and technologies – both wired and wireless – will be required to meet these needs.
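To give a feel for how different that traffic looks from mobile streaming, here is a small Python sketch of a low-duty-cycle node: it packs one reading into an 8-byte datagram, sends it, and then goes quiet for a long interval. The field layout, reporting interval, and collector address are hypothetical choices for illustration.

```python
import socket
import struct
import time

COLLECTOR = ("127.0.0.1", 5683)   # stand-in for a real collector's address and port
INTERVAL_S = 600                  # one report every ten minutes

def build_report(node_id: int, temp_c: float, battery_mv: int) -> bytes:
    """Pack a reading into 8 bytes: node id, temperature in 0.01 C units, battery in mV."""
    return struct.pack("!HhHxx", node_id, round(temp_c * 100), battery_mv)

def run(reports: int = 3) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for _ in range(reports):
        payload = build_report(node_id=42, temp_c=21.4, battery_mv=2980)
        sock.sendto(payload, COLLECTOR)   # a single tiny datagram...
        print(f"sent {len(payload)} bytes; sleeping {INTERVAL_S} s")
        time.sleep(INTERVAL_S)            # ...then silence until the next cycle

if __name__ == "__main__":
    run()
```

A node like this moves only a few dozen bytes per hour; multiplied across billions of endpoints, the hard problems become connection density, power, and cost per node rather than raw bandwidth, and that is the gap the new standards will have to fill.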

So, from hardware to software to networking – IoT is forcing a new era of priorities upon us. Mobile, it was fun working with you – but there’s a new boss in town now. Say hello to the IoT administration! It will be interesting to watch.
