
2017 – The IoT Administration Begins

A New Leader for Electronic Design

During the fifty-year history of Moore’s Law, technological progress in electronics has served two distinct masters. While the industry produces an enormous range of technologies, deployed in countless systems and addressing innumerable application domains, there has always been one clear driver, one prototype system, one application that rules them all and bends our collective innovative energies to its will.

First, we had the PC administration. Technological innovation followed and served computing – specifically, personal computers and the first generation of internet connectivity. While the chips, tools, and boards we created were applicable to everything from home stereos to spaceships, the economic and technological rules were written by those who populated the world with ubiquitous connected desktop computing. Computers needed more speed to run consumer-friendly GUI-based operating systems, so semiconductor processes were designed to maximize megahertz, MIPS, and Mbps in whatever way possible. This led to monolithic microprocessors loaded with power-hungry transistors, packed into fan-cooled enclosures with big ol’ heat sinks. It gave us PCI, Ethernet, and first-generation USB.

The PC administration also brought us unseen legions of packet-switching contraptions, mostly crammed with dozens of the fastest FPGAs Taiwan could fabricate – humming away night and day – directing and delivering each precious packet of primitive web content safely to the waiting eyes of the world’s exponentially growing community of wired citizenry. The appetite for bandwidth was so voracious that cost, power consumption, form factor, and just about every other imaginable design constraint took a back seat to the laying of pipe and pavement on the information superhighway.

Then, almost unceremoniously, the PC administration gave over the helm to a new master – Mobility. Practically overnight, our priorities shifted. Now power, cost, and form factor took center stage. Wiggling transistors as fast as physically possible no longer seemed like a good idea. Leakage current became our enemy. We set about using the bounty of transistors Mr. Moore had bequeathed us in a different way – eking out the most computation per coulomb with clever strategies for clock gating, parallelizing complex tasks, and reducing the impact of complex software systems. One day of battery life became the immovable object. Slipping a device comfortably into a jeans pocket outweighed doubling the Fmax. Fitting the BOM cost into something that could be given away for free with a 2-year service plan sidelined the previous generation’s big-iron Intel/AMD micros, in favor of ARM-architected application cores thriftily booting stripped-down variations of UNIX.

The Mobility administration saw the build-out of the wireless infrastructure, and populating the world with more cell towers took priority over bolstering the backbone of the internet. We wanted skateboarding dog videos, and we wanted them wherever and whenever the mood struck us. The wireless data economy wielded enormous power, and the industry responded with standards, protocols, semiconductor processes, chips, connectors, and PCB technologies that allowed us to pave the planet with pocket-sized quad-core 64-bit processing marvels at prices the average student could afford to replace annually.

Now, however, there’s a new sheriff in town. Mobility is giving way to IoT, and the implications for technology development span the stack from top to bottom. IoT is as different an animal from mobile as mobile was from desktop. IoT encompasses the gamut of challenges from infinitesimally inexpensive edge devices quietly gathering sensor data using tiny trickles of harvested energy – to enormous cloud data centers sucking zettabytes of content through monstrous information pipes – processing, storing, and returning it with almost incomprehensible computing power, and trying to get by within the maximum energy budget the local utility can possibly provide.

At the base semiconductor level, the challenge is no longer “Cramming More Components onto Integrated Circuits.” Instead, it’s more subtle – like “cramming more different types of highly-efficient components into smaller, cheaper modules.” Many IoT edge devices need to sip the tiniest rations of power while keeping at least some “always on” monitoring of MEMS and other sensors. This means integration of digital, analog, and even MEMS into low-cost, small-form-factor, ultra-low-power packages. The ability to stitch disparate types of technology together on one silicon substrate or interposer – logic, memory, analog – perhaps even MEMS – is a formidable weapon in the IoT edge war.
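To make the “sip the tiniest rations of power” point concrete, here is a back-of-the-envelope sketch of how duty cycling dominates an edge node’s power budget. The current figures and wake interval below are hypothetical illustrations, not numbers from any particular device:

```python
# Toy estimate of average current draw for a duty-cycled IoT edge node.
# All figures are hypothetical illustrations, not measurements.

def average_current_ua(active_ua, sleep_ua, active_ms, period_ms):
    """Time-weighted average current over one wake/sleep cycle, in microamps."""
    duty = active_ms / period_ms
    return active_ua * duty + sleep_ua * (1 - duty)

# Wake for 5 ms every second to sample a MEMS sensor; deep-sleep otherwise.
avg = average_current_ua(active_ua=8000, sleep_ua=2, active_ms=5, period_ms=1000)
print(f"average draw: {avg:.1f} uA")  # prints "average draw: 42.0 uA"
```

Even with an active current thousands of times the sleep current, a half-percent duty cycle pulls the average down into the tens of microamps – the regime where harvested energy becomes plausible.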

Not all IoT edge nodes are monitoring simple inertial sensors, however. Many of our IoT devices need far more formidable senses – like vision. For that, we need impressive computing power coupled with small form factors and power budgets. In this realm (and in many other parts of IoT), the key is heterogeneous distributed processing. Solving the complete problem requires a combination of processors with different architectures at different points in the signal chain. New SoCs combining conventional processors with FPGA-based accelerators can hash through piles of incoming video, distilling the interesting bits into a much smaller data stream that can be passed upstream toward the cloud. In the data center, FPGAs, GPUs, and server processors may divide the workload further, running neural algorithms that identify persons, places, things, and activities from big-data warehouses before passing their analysis back downstream to other nodes – perhaps once again at the edge – to take some action.
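The edge-side distillation step described above can be sketched in miniature: keep only the “interesting” observations and forward a compact summary upstream. The frame scores, threshold, and names here are invented purely for illustration:

```python
# Toy sketch of edge-side distillation: forward only frames whose motion
# score crosses a threshold, shrinking the stream sent toward the cloud.
# Scores, threshold, and names are illustrative assumptions only.

def distill(frames, threshold=0.5):
    """Reduce a stream of (frame_id, motion_score) pairs to the interesting ones."""
    return [(fid, score) for fid, score in frames if score >= threshold]

incoming = [(0, 0.02), (1, 0.03), (2, 0.91), (3, 0.88), (4, 0.01)]
upstream = distill(incoming)
print(upstream)  # only the two high-motion frames travel toward the cloud
```

The real work, of course, is in computing a meaningful “interestingness” score at video rates – exactly the kind of parallel pixel crunching that FPGA-based accelerators handle well.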

In fact, one of the most critical concepts in IoT may be heterogeneous distributed processing. While heavy-iron von Neumann machines are fast and flexible, there are few tasks for which they are the optimal solution. Decomposing complex, system-level algorithms into pieces that can run in parallel on application-appropriate optimized hardware accelerators of various types (FPGAs, GPUs, MCUs, real-time processors, low-power application processors, and big-money server processors), and then putting the appropriate types of workloads onto the correct processing nodes at the correct place in the signal chain (optimizing computation, networking, and storage resources), is a daunting challenge that we have only begun to address in the most primitive fashion.
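In its most primitive fashion, the placement problem above amounts to a mapping from pipeline stages to classes of hardware. The stage names and device assignments in this sketch are illustrative assumptions, not a prescription:

```python
# Toy illustration of heterogeneous workload placement: a dispatch table
# mapping pipeline stages to the class of hardware suited to run them.
# Stage names and assignments are illustrative assumptions only.

PLACEMENT = {
    "sensor_sampling":   "MCU",               # always-on, microwatt budget
    "video_frontend":    "FPGA accelerator",  # parallel pixel crunching at the edge
    "feature_inference": "GPU",               # neural workloads in the data center
    "orchestration":     "server CPU",        # flexible control and storage logic
}

def place(stage):
    """Return the node class assigned to a pipeline stage."""
    return PLACEMENT.get(stage, "server CPU")  # default to general-purpose compute

for stage in ("sensor_sampling", "video_frontend", "feature_inference"):
    print(f"{stage} -> {place(stage)}")
```

A static table like this is exactly what future tools will need to outgrow – deciding placement dynamically, across thousands of nodes, against energy, latency, and bandwidth constraints.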

Solving this heterogeneous distributed computing problem is primarily a tool challenge. The industry will need a new generation of smarter software development tools that can target massive networked applications to arbitrary configurations of available hardware resources (only some of which will be conventional processors). Doing so has the potential to improve the energy efficiency of our computing infrastructure by several orders of magnitude, independent of future semiconductor technology gains from Moore’s Law. This is critical because successful global deployment of IoT to its full potential would quickly overtax both the total available computing power and the total available electric power we have today.

On the hardware side, the art of creating the appropriate system-level architecture for IoT deployments is still a vast unexplored territory. Even in the isolated arena of using FPGAs for data-center acceleration, the smartest companies in the world don’t agree at even the most basic level. Recently, we saw Xilinx win a deployment with Amazon for FPGA clusters to be made available as pooled resources on cloud-based servers, while Intel/Altera pursues a much finer-grained strategy of pairing conventional server processors with FPGAs at the package level. These two architectural approaches are vastly different, and there is strong disagreement among experts about which approach is better (we’ll discuss this more in an upcoming article).

Also, IoT brings with it the most substantial software challenges we’ve ever seen. Developing single applications that span the entire gamut from tiny edge devices to cloud-based computing and storage resources calls for a new breadth of expertise in software teams, as well as a new level of cooperation between software and hardware sides of the house. 

Networking is not left out, either: the infrastructure we’ve created for desktop and mobile falls short when it comes to IoT. The demands of billions of new nodes, many of which will require small duty cycles and tiny bandwidths, present serious challenges for the current YouTube-streaming mobile infrastructure. As we’ve discussed extensively in these pages, new standards and technologies – both wired and wireless – will be required to meet these needs.

So, from hardware to software to networking – IoT is forcing a new era of priorities upon us. Mobile, it was fun working with you – but there’s a new boss in town now. Say hello to the IoT administration! It will be interesting to watch.
