Computing at a Crossroads

Redefining System-Level Engineering

Computing is at a crossroads. For decades, we have surfed the exponential wave of Moore’s Law, tuning and tweaking the various von Neumann architectures, resizing caches, redefining pipelines, debating RISC vs. CISC, messing with memory structures, widening words, predicting branches, and generally futzing around until we reached a point where we could claim victory for another node. We have built various schemes for peripherals, processors, memory, and storage to communicate, bolted on accelerators for various purposes, and tested variations on a theme for specialized problems such as signal processing or graphics.

With all of this evolution, refinement, and tuning, the core driver of progress was Moore’s Law. Every two years we were awarded this gift of more transistors switching faster with less power and at lower cost. It’s hard to lose when you’ve got that kind of built-in advantage working for you. And, on top of all that hardware stuff, an enormous ecosystem of software – compilers, operating systems, development environments, and tools – evolved.

Now, however, Moore’s Law is running out of gas – maybe not stopping completely, but losing momentum in a game-changing way. Each new node now takes closer to three years than two. Each new node costs exponentially more to use than the last. And the bounty of price, performance, and power is dramatically reduced, forcing us to choose one or two of those at the expense of the rest. For many applications, the benefits of moving to the latest process node are dubious or nonexistent. It’s better to sit back and enjoy semiconductor technology from half a decade ago than to push your product to the edge and risk falling off.

At the same time, AI is coming into its own. Neural networks are becoming a viable solution for more and more problems. Engineering talent that was previously deployed to design new systems based on conventional computing has been redirected to focus on data science and deep learning. We are designing systems that span the IoT from edge to cloud and back. Engineering has escaped from the box. We no longer design devices; we create services delivered by heterogeneous systems that distribute complex computing problems across a wide range of architectures optimized for various sub-tasks. (Try saying THAT three times fast.)

New packaging technologies are changing our units of design currency from SoCs to SiPs – “systems in package.” With Moore’s Law stagnating, it is often advantageous to manufacture memories, logic, analog, and interfaces on different process technologies. But the advantages of integration in a single package persist. As a result, we are seeing the definition of “device” shift from predominantly monolithic silicon CMOS chips in a package to complex combinations of diverse chiplets communicating via interposers or other similar schemes.

Let’s face it – for a long time now, digital design at the system level has mostly been about building faster, smaller, cheaper, more efficient von Neumann computers. Our notion of custom chip design shifted from ASIC – literally “application-specific integrated circuit” – to SoC – “system on a chip” – where the “system” was narrowly defined as a processor with an appropriate set of peripherals. Most custom digital chip design became a task of integrating increasingly complicated sets of IP blocks – processors, peripherals, memories and memory interfaces, and various I/O schemes – into a new monolithic IC, often with very little application-specific hardware content.

We are now at an inflection point where the whole game may change. For one thing, we may be integrating much more often at the package level. And, while the ecosystems for integrating at the board level and at the monolithic chip level are fairly robustly defined, the flow for developing an SiP is considerably less mature. SiP development may involve getting chiplets from various suppliers and integrating them using one of several packaging schemes. Since industry standards are far from mature in this space, most development will be somewhat custom – which means more work for engineering experts in the latest packaging techniques.

Increasing use of various types of accelerators and neural network processors will dramatically complicate hardware design. We won’t just grab the latest appropriate ARM core and synthesize it into a usable block on a generic-ish SoC. Our applications will often be partitioned across heterogeneous computing machines, with different types of hardware optimized for various parts of the task – sensor fusion on MCUs or FPGAs, pattern matching on neural networks, high-level application management on conventional processors – the list goes on and on. The days of a “system” consisting of a single processor or MCU running a simple software stack are disappearing into the rearview mirror.
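To make that partitioning idea concrete, here is a minimal, purely illustrative sketch in Python. Every name in it (Target, TASK_MAP, dispatch, the task strings) is invented for this example – it is not a real API – but it shows the shape of the design decision: one application statically mapped across an MCU for sensor fusion, a neural-network accelerator for pattern matching, and a conventional CPU for application management.

```python
# Illustrative sketch only – every name here (Target, TASK_MAP, dispatch)
# is hypothetical, not a real framework or API.
from enum import Enum, auto

class Target(Enum):
    MCU = auto()   # low-power sensor fusion
    NPU = auto()   # neural-network pattern matching
    CPU = auto()   # high-level application management

# Static partition: each sub-task is pinned to the engine best suited to it.
TASK_MAP = {
    "fuse_sensors":   Target.MCU,
    "classify_scene": Target.NPU,
    "manage_app":     Target.CPU,
}

def dispatch(task: str, payload: dict) -> str:
    """Route a sub-task to its assigned target (stubbed as a string here)."""
    target = TASK_MAP[task]
    return f"{task} -> {target.name} ({len(payload)} inputs)"

if __name__ == "__main__":
    for task in TASK_MAP:
        print(dispatch(task, {"camera": 1, "imu": 2}))
```

In a real system, the dispatch would cross bus, interposer, or network boundaries rather than a Python dictionary, but the engineering question is the same one the paragraph above raises: which sub-task belongs on which compute engine.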

All of this discontinuity has serious career implications for engineers. More of the tasks we have become “expert” in are being subsumed into pre-engineered, reusable blocks of various types. System-level engineering has moved another layer up the abstraction tree and pulled many of the more specialized disciplines along with it. If you were the power-supply expert at a systems company, you may now simply be selecting the best power module. If you were the RTL guru, you may now be tasked with integrating synthesizable blocks designed by others. If your expertise (and enthusiasm) locks you into a particular role in the food chain, you may find yourself consumed.

But discontinuous change brings opportunity – both for innovation and for career advancement. The case for continuing education in engineering has never been stronger. There are numerous new and exciting areas of technology that are starving for experts. Getting out of our comfort zones and applying our problem-solving skills to a rapidly evolving new technology is likely to reap huge rewards. On the other hand, sitting back and “phoning it in” by doing more variations on the same design you’ve been doing for the last decade or so – not so much. It’s time to take some risks.
