
2017 – The IoT Administration Begins

A New Leader for Electronic Design

During the fifty-year history of Moore’s Law, technological progress in electronics has served two distinct masters. While the industry produces an enormous range of technologies, deployed in countless systems and addressing innumerable application domains, there has always been one clear driver, one prototype system, one application that rules them all and bends our collective innovative energies to its will.

First, we had the PC administration. Technological innovation followed and served computing – specifically, personal computers and the first generation of internet connectivity. While the chips, tools, and boards we created were applicable to everything from home stereos to spaceships, the economic and technological rules were written by those who populated the world with ubiquitous connected desktop computing. Computers needed more speed to run consumer-friendly GUI-based operating systems, so semiconductor processes were designed to maximize megahertz, MIPS, and Mbps in whatever way possible. This led to monolithic microprocessors loaded with power-hungry transistors, packed into fan-cooled enclosures with big ol’ heat sinks. It gave us PCI, Ethernet, and first-generation USB.

The PC administration also brought us unseen legions of packet-switching contraptions, mostly crammed with dozens of the fastest FPGAs Taiwan could fabricate – humming away night and day – directing and delivering each precious packet of primitive web content safely to the waiting eyes of the world’s exponentially growing community of wired citizenry. The hunger for bandwidth was so voracious that cost, power consumption, form factor, and just about every other imaginable design constraint took a back seat to the laying of pipe and pavement on the information superhighway.

Then, almost unceremoniously, the PC administration gave over the helm to a new master – Mobility. Practically overnight, our priorities shifted. Now power, cost, and form factor took center stage. Wiggling transistors as fast as physically possible no longer seemed like a good idea. Leakage current became our enemy. We set about using the bounty of transistors Mr. Moore had bequeathed us in a different way – eking out the most computation per coulomb with clever strategies for clock gating, parallelizing complex tasks, and reducing the impact of complex software systems. One day of battery life became the immovable object. Slipping a device comfortably into a jeans pocket outweighed doubling the Fmax. Fitting the BOM cost into something that could be given away for free with a 2-year service plan sidelined the previous generation’s big-iron Intel/AMD micros in favor of ARM-architected application cores thriftily booting stripped-down variations of UNIX.

The Mobility administration saw the build-out of the wireless infrastructure, and populating the world with more cell towers took priority over bolstering the backbone of the internet. We wanted skateboarding dog videos, and we wanted them wherever and whenever the mood struck us. The wireless data economy wielded enormous power, and the industry responded with standards, protocols, semiconductor processes, chips, connectors, and PCB technologies that allowed us to pave the planet with pocket-sized quad-core 64-bit processing marvels at prices the average student could afford annually.

Now, however, there’s a new sheriff in town. Mobility is giving way to IoT, and the implications for technology development span the stack from top to bottom. IoT is as different an animal from mobile as mobile was from desktop. IoT encompasses the gamut of challenges from infinitesimally inexpensive edge devices quietly gathering sensor data using tiny trickles of harvested energy – to enormous cloud data centers sucking zettabytes of content through monstrous information pipes – processing, storing, and returning it with almost incomprehensible computing power, while trying to get by within the maximum energy budget the local utility can possibly provide.

At the base semiconductor level, the challenge is no longer “Cramming More Components onto Integrated Circuits.” Instead, it’s more subtle – like “cramming more different types of highly-efficient components into smaller, cheaper modules.” Many IoT edge devices need to sip the tiniest rations of power while keeping at least some “always on” monitoring of MEMS and other sensors. This means integration of digital, analog, and even MEMS into low-cost, small-form-factor, ultra-low-power packages. The ability to stitch disparate types of technology together on one silicon substrate or interposer – logic, memory, analog – perhaps even MEMS – is a formidable weapon in the IoT edge war.

Not all IoT edge nodes are monitoring simple inertial sensors, however. Many of our IoT devices need far more formidable senses – like vision. For that, we need impressive computing power coupled with small form factors and power budgets. In this realm (and in many other parts of IoT), the key is heterogeneous distributed processing. Solving the complete problem requires a combination of processors with different architectures at different points in the signal chain. New SoCs combining conventional processors with FPGA-based accelerators can hash through piles of incoming video, distilling the interesting bits into a much smaller data stream that can be passed upstream toward the cloud. In the data center, FPGAs, GPUs, and server processors may divide the workload further, running neural algorithms that identify persons, places, things, and activities from big-data warehouses before passing their analysis back downstream to other nodes – perhaps once again at the edge – to take some action.
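The "distillation" idea above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's pipeline: an edge node compares successive frames and forwards upstream only those that change enough to be interesting, shrinking the data stream long before it reaches the cloud. Frames here are plain lists of pixel intensities, and the threshold is invented for illustration.

```python
# Hypothetical edge-side distillation: forward a frame upstream only when it
# differs meaningfully from the last frame we forwarded.

def frame_delta(prev, curr):
    """Mean absolute difference between two equal-length frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def distill(frames, threshold=10.0):
    """Return only the frames that changed enough versus the last kept one."""
    kept = [frames[0]]  # always forward the first frame as a baseline
    for frame in frames[1:]:
        if frame_delta(kept[-1], frame) >= threshold:
            kept.append(frame)
    return kept

# A static scene with one brief event: eleven frames in, three frames out.
stream = [[0, 0, 0, 0]] * 5 + [[80, 80, 80, 80]] + [[0, 0, 0, 0]] * 5
forwarded = distill(stream)
```

A real SoC would do this comparison in FPGA fabric at line rate rather than in software, but the architectural point is the same: most of the bandwidth never needs to leave the edge.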

In fact, one of the most critical concepts in IoT may be heterogeneous distributed processing. While heavy-iron von Neumann machines are fast and flexible, there are few tasks for which they are the optimal solution. Decomposing complex, system-level algorithms into pieces that can run in parallel on application-appropriate optimized hardware accelerators of various types (FPGAs, GPUs, MCUs, real-time processors, low-power application processors, and big-money server processors), and then putting the right workloads onto the right processing nodes at the right place in the signal chain (optimizing computation, networking, and storage resources), is a daunting challenge that we have only begun to address in the most primitive fashion.
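To make the placement problem concrete, here is a deliberately toy sketch. All the node types, workload kinds, and efficiency numbers are invented for illustration: given tasks of different characters and a menu of heterogeneous resources, a greedy matcher assigns each task to the node type whose strengths best fit it. Real placement must also weigh networking, storage, and position in the signal chain, which is precisely why the full problem is so hard.

```python
# Illustrative (invented) relative efficiency of each node type per workload
# kind -- higher is better. A real system would measure these, and would also
# account for data movement and storage costs.
EFFICIENCY = {
    "fpga":   {"streaming": 9, "control": 2, "neural": 6},
    "gpu":    {"streaming": 5, "control": 1, "neural": 9},
    "mcu":    {"streaming": 2, "control": 8, "neural": 1},
    "server": {"streaming": 4, "control": 6, "neural": 7},
}

def place(tasks):
    """Greedily map each (task_name, kind) pair to the best-matching node type."""
    return {
        name: max(EFFICIENCY, key=lambda node: EFFICIENCY[node][kind])
        for name, kind in tasks
    }

pipeline = [
    ("video_filter", "streaming"),      # best served by FPGA fabric
    ("sensor_polling", "control"),      # best served by a small MCU
    ("object_recognition", "neural"),   # best served by a GPU
]
assignment = place(pipeline)
```

Even this trivial version shows the shape of the problem: the answer depends entirely on the cost model, and building accurate cost models across an entire distributed system is where today's tools fall short.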

Solving this heterogeneous distributed computing problem is primarily a tool challenge. The industry will need a new generation of smarter software development tools that can target massive networked applications to arbitrary configurations of available hardware resources (only some of which will be conventional processors). Doing so has the potential to improve the energy efficiency of our computing infrastructure by several orders of magnitude, independent of future semiconductor technology gains from Moore’s Law. This is critical because successful global deployment of IoT to its full potential would quickly overtax both the total available computing power and the total available electric power we have today.

On the hardware side, the art of creating the appropriate system-level architecture for IoT deployments is still a vast unexplored territory. Even in the isolated arena of using FPGAs for data-center acceleration, the smartest companies in the world don’t agree at even the most basic level. Recently, we saw Xilinx win a deployment with Amazon for FPGA clusters to be made available as pooled resources on cloud-based servers, while Intel/Altera pursues a much finer-grained strategy of pairing conventional server processors with FPGAs at the package level. These two architectural approaches are vastly different, and there is strong disagreement among experts about which approach is better (we’ll discuss this more in an upcoming article).

Also, IoT brings with it the most substantial software challenges we’ve ever seen. Developing single applications that span the entire gamut from tiny edge devices to cloud-based computing and storage resources calls for a new breadth of expertise in software teams, as well as a new level of cooperation between software and hardware sides of the house. 

Not to be left out, the networking infrastructure that we’ve created for desktop and mobile falls short when it comes to IoT. The demands of billions of new nodes, many of which will require small duty cycles and tiny bandwidths, present serious challenges for the current YouTube-streaming mobile infrastructure. As we’ve discussed extensively in these pages, new standards and technologies – both wired and wireless – will be required to meet these needs.

So, from hardware to software to networking – IoT is forcing a new era of priorities upon us. Mobile, it was fun working with you – but there’s a new boss in town now. Say hello to the IoT administration! It will be interesting to watch.

