
2017 – The IoT Administration Begins

A New Leader for Electronic Design

During the fifty-year history of Moore’s Law, technological progress in electronics has served two distinct masters. While the industry produces an enormous range of technologies, deployed in countless systems and addressing innumerable application domains, there has always been one clear driver, one prototype system, one application that rules them all and bends our collective innovative energies to its will.

First, we had the PC administration. Technological innovation followed and served computing – specifically, personal computers and the first generation of internet connectivity. While the chips, tools, and boards we created were applicable to everything from home stereos to spaceships, the economic and technological rules were written by those who populated the world with ubiquitous connected desktop computing. Computers needed more speed to run consumer-friendly GUI-based operating systems, so semiconductor processes were designed to maximize megahertz, MIPS, and Mbps in whatever way possible. This led to monolithic microprocessors loaded with power-hungry transistors, packed into fan-cooled enclosures with big ol' heat sinks. It gave us PCI, Ethernet, and first-generation USB.

The PC administration also brought us unseen legions of packet-switching contraptions, mostly crammed with dozens of the fastest FPGAs Taiwan could fabricate – humming away night and day – directing and delivering each precious packet of primitive web content safely to the waiting eyes of the world's exponentially growing community of wired citizenry. The appetite for bandwidth was so voracious that cost, power consumption, form factor, and just about every other imaginable design constraint took a back seat to the laying of pipe and pavement on the information superhighway.

Then, almost unceremoniously, the PC administration gave over the helm to a new master – Mobility. Practically overnight, our priorities shifted. Now power, cost, and form factor took center stage. Wiggling transistors as fast as physically possible no longer seemed like a good idea. Leakage current became our enemy. We set about using the bounty of transistors Mr. Moore had bequeathed us in a different way – eking out the most computation per coulomb with clever strategies for clock gating, parallelizing complex tasks, and reducing the impact of complex software systems. One day of battery life became the immovable object. Slipping a device comfortably into a jeans pocket outweighed doubling the Fmax. Fitting the BOM cost into something that could be given away for free with a 2-year service plan sidelined the previous generation's big-iron Intel/AMD micros, in favor of ARM-architected application cores thriftily booting stripped-down variations of UNIX.

The Mobility administration saw the build-out of the wireless infrastructure; populating the world with more cell towers took priority over bolstering the backbone of the internet. We wanted skateboarding dog videos, and we wanted them wherever and whenever the mood struck us. The wireless data economy wielded enormous power, and the industry responded with standards, protocols, semiconductor processes, chips, connectors, and PCB technologies that allowed us to pave the planet with pocket-sized quad-core 64-bit processing marvels at prices the average student could afford to replace annually.

Now, however, there’s a new sheriff in town. Mobility is giving way to IoT, and the implications for technology development span the stack from top to bottom. IoT is as different an animal from mobile as mobile was from desktop. IoT encompasses the gamut of challenges from infinitesimally inexpensive edge devices quietly gathering sensor data using tiny trickles of harvested energy – to enormous cloud data centers sucking zettabytes of content through monstrous information pipes – processing, storing, and returning it with almost incomprehensible computing power, while trying to get by within the maximum energy budget the local utility can possibly provide.

At the base semiconductor level, the challenge is no longer “Cramming More Components onto Integrated Circuits.” Instead, it’s more subtle – like “cramming more different types of highly-efficient components into smaller, cheaper modules.” Many IoT edge devices need to sip the tiniest rations of power while keeping at least some “always on” monitoring of MEMS and other sensors. This means integration of digital, analog, and even MEMS into low-cost, small-form-factor, ultra-low-power packages. The ability to stitch disparate types of technology together on one silicon substrate or interposer – logic, memory, analog – perhaps even MEMS – is a formidable weapon in the IoT edge war.

Not all IoT edge nodes are monitoring simple inertial sensors, however. Many of our IoT devices need far more formidable senses – like vision. For that, we need impressive computing power coupled with small form factors and power budgets. In this realm (and in many other parts of IoT), the key is heterogeneous distributed processing. Solving the complete problem requires a combination of processors with different architectures at different points in the signal chain. New SoCs combining conventional processors with FPGA-based accelerators can hash through piles of incoming video, distilling the interesting bits into a much smaller data stream that can be passed upstream toward the cloud. In the data center, FPGAs, GPUs, and server processors may divide the workload further, running neural algorithms that identify persons, places, things, and activities from big-data warehouses before passing their analysis back downstream to other nodes – perhaps once again at the edge – to take some action.

In fact, one of the most critical concepts in IoT may be heterogeneous distributed processing. While heavy-iron von Neumann machines are fast and flexible, there are few tasks for which they are the optimal solution. Decomposing complex, system-level algorithms into pieces that can run in parallel on application-appropriate optimized hardware accelerators of various types (FPGAs, GPUs, MCUs, real-time processors, low-power application processors, and big-money server processors), and then mapping each workload onto the right processing node at the right place in the signal chain (optimizing computation, networking, and storage resources), is a daunting challenge that we have only begun to address in the most primitive fashion.
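To make the placement problem concrete, it can be sketched as a toy cost-minimization exercise. Everything below is an illustrative assumption – the task names, node types, and energy numbers are invented for the example, not drawn from any real deployment – and real schedulers must also weigh bandwidth, latency, and data locality:

```python
# Hypothetical energy cost (arbitrary units) to run each kind of task
# on each node type. A missing entry means that node can't run the task.
COST = {
    ("filter_sensor_data", "mcu"): 1,
    ("filter_sensor_data", "server_cpu"): 50,
    ("video_feature_extraction", "fpga"): 5,
    ("video_feature_extraction", "server_cpu"): 40,
    ("neural_net_inference", "gpu"): 10,
    ("neural_net_inference", "server_cpu"): 100,
}

def place(tasks, nodes):
    """Greedily assign each task to the cheapest node able to run it."""
    plan = {}
    for task in tasks:
        candidates = [(COST[(task, n)], n) for n in nodes if (task, n) in COST]
        cost, node = min(candidates)
        plan[task] = (node, cost)
    return plan

pipeline = ["filter_sensor_data", "video_feature_extraction",
            "neural_net_inference"]
plan = place(pipeline, ["mcu", "fpga", "gpu", "server_cpu"])
for task, (node, cost) in plan.items():
    print(f"{task} -> {node} (cost {cost})")
```

Even this naive greedy pass shows the shape of the problem: a general-purpose server CPU can run every stage, but it is rarely the cheapest place to run any of them.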

Solving this heterogeneous distributed computing problem is primarily a tool challenge. The industry will need a new generation of smarter software development tools that can target massive networked applications to arbitrary configurations of available hardware resources (only some of which will be conventional processors). Doing so has the potential to improve the energy efficiency of our computing infrastructure by several orders of magnitude, independent of future semiconductor technology gains from Moore’s Law. This is critical because successful global deployment of IoT to its full potential would quickly overtax both the total available computing power and the total available electric power we have today.

On the hardware side, the art of creating the appropriate system-level architecture for IoT deployments is still a vast unexplored territory. Even in the isolated arena of using FPGAs for data-center acceleration, the smartest companies in the world don’t agree at even the most basic level. Recently, we saw Xilinx win a deployment with Amazon for FPGA clusters to be made available as pooled resources on cloud-based servers, while Intel/Altera pursues a much finer-grained strategy of pairing conventional server processors with FPGAs at the package level. These two architectural approaches are vastly different, and there is strong disagreement among experts about which approach is better (we’ll discuss this more in an upcoming article).

Also, IoT brings with it the most substantial software challenges we’ve ever seen. Developing single applications that span the entire gamut from tiny edge devices to cloud-based computing and storage resources calls for a new breadth of expertise in software teams, as well as a new level of cooperation between software and hardware sides of the house. 

Not to be left out, the networking infrastructure that we’ve created for desktop and mobile falls short when it comes to IoT. The demands of billions of new nodes, many of which will require small duty cycles and tiny bandwidths, present serious challenges for the current YouTube-streaming mobile infrastructure. As we’ve discussed extensively in these pages, new standards and technologies – both wired and wireless – will be required to meet these needs.

So, from hardware to software to networking – IoT is forcing a new era of priorities upon us. Mobile, it was fun working with you – but there’s a new boss in town now. Say hello to the IoT administration! It will be interesting to watch.
