
Computing at a Crossroads

Redefining System-Level Engineering

Computing is at a crossroads. For decades, we have surfed the exponential wave of Moore’s Law, tuning and tweaking the various von Neumann architectures, resizing caches, redefining pipelines, debating RISC vs. CISC, messing with memory structures, widening words, predicting branches, and generally futzing around until we reached a point where we could claim victory for another node. We have built various schemes for peripherals, processors, memory, and storage to communicate; bolted on accelerators for various purposes; and tested variations on a theme for specialized problems such as signal processing or graphics.

With all of this evolution, refinement, and tuning, the core driver of progress was Moore’s Law. Every two years we were awarded this gift of more transistors switching faster with less power and at lower cost. It’s hard to lose when you’ve got that kind of built-in advantage working for you. And, on top of all that hardware stuff, an enormous ecosystem of software, compilers, operating systems, development environments, and tools evolved.  

Now, however, Moore’s Law is running out of gas – maybe not stopping completely, but losing momentum in a game-changing way. Each new node takes closer to three years than two. Each new node costs exponentially more to use than the last. And the bounty of price, performance, and power is dramatically reduced, requiring us to choose only one or two at the expense of the rest. For many applications, the benefits of using the latest process node are dubious or nonexistent. It’s better to sit back and enjoy semiconductor technology from half a decade ago than to push your product to the edge and risk falling off.

At the same time, AI is coming into its own. Neural networks are becoming a viable solution for more and more problems. Engineering talent that was previously deployed to design new systems based on conventional computing has been redirected to focus on data science and deep learning. We are designing systems that span the IoT from edge to cloud and back. Engineering has escaped from the box. We no longer design devices; we create services delivered by heterogeneous systems that distribute complex computing problems across a wide range of architectures optimized for various sub-tasks. (Try saying THAT three times fast.)

New packaging technologies are changing our unit of design currency from the SoC to the SiP (system in package). With Moore’s Law stagnating, it is often advantageous to manufacture memories, logic, analog, and interfaces on different process technologies, but the advantages of integration in a single package persist. As a result, the definition of “device” is shifting from a predominantly monolithic silicon CMOS chip in a package to a complex combination of diverse chiplets communicating via interposers or similar schemes.

Let’s face it – for a long time now, digital design at the system level has mostly been about building faster, smaller, cheaper, more efficient von Neumann computers. Our notion of custom chip design shifted from ASIC – literally “application-specific integrated circuit” – to SoC – “system on a chip” – where the “system” was narrowly defined as a processor with an appropriate set of peripherals. Most custom digital chip design became a task of integrating increasingly complicated sets of IP blocks – processors, peripherals, memories and memory interfaces, and various IO schemes – into a new monolithic IC, often with very little application-specific hardware content.  

We are now at an inflection point where the whole game may change. For one thing, we may be integrating much more often at the package level. And, while the ecosystems for integration at the board level and at the monolithic chip level are fairly well defined, the flow for developing a SiP is considerably less mature. SiP development may involve sourcing chiplets from various suppliers and integrating them using one of several packaging schemes. Since industry standards are far from mature in this space, most development will be somewhat custom – which means more work for engineering experts in the latest packaging techniques.

Increasing use of various types of accelerators and neural network processors will dramatically complicate hardware design. We won’t just grab the latest appropriate ARM core and synthesize it into a usable block on a generic-ish SoC. Our applications will often be partitioned across heterogeneous computing machines, with different types of hardware optimized for various parts of the task – sensor fusion on MCUs or FPGAs, pattern matching on neural networks, high-level application management on conventional processors – the list goes on and on. The days of a “system” consisting of a single processor or MCU running a simple software stack are disappearing into the rearview mirror.
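To make that partitioning a little more concrete, here is a minimal Python sketch of how such a split might be structured. The fuse_on_mcu, classify_on_npu, and application_logic helpers are hypothetical stand-ins for the offload APIs a real heterogeneous platform would provide, not any actual vendor interface, and the computations inside them are trivial placeholders.

from dataclasses import dataclass
from typing import List

@dataclass
class SensorFrame:
    camera: bytes        # raw image data from a camera
    radar: List[float]   # range/velocity returns from a radar front end

def fuse_on_mcu(frame: SensorFrame) -> List[float]:
    # Placeholder for fixed-point sensor fusion offloaded to an MCU or FPGA fabric.
    return [len(frame.camera) / 1024.0] + frame.radar

def classify_on_npu(features: List[float]) -> str:
    # Placeholder for pattern matching dispatched to a neural-network accelerator.
    return "obstacle" if sum(features) > 10.0 else "clear"

def application_logic(label: str) -> None:
    # High-level decision making stays on the conventional host processor.
    print("decision:", "brake" if label == "obstacle" else "proceed")

if __name__ == "__main__":
    frame = SensorFrame(camera=b"\x00" * 4096, radar=[3.2, 7.9, 1.1])
    application_logic(classify_on_npu(fuse_on_mcu(frame)))

Even in a toy like this, the system-level engineering problem is the orchestration across the three stages, not the design of any one of them.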

All of this discontinuity has serious career implications for engineers. More of the tasks we have become “expert” in are being subsumed into pre-engineered, reusable blocks of various types. System-level engineering has moved another layer up the abstraction tree and pulled many of the more specialized disciplines along with it. If you were the power supply expert at a systems company, you may now be simply selecting the best power module. If you were the RTL guru, you may now be tasked with integrating synthesizable blocks designed by others. If your expertise (and enthusiasm) locks you into a particular role in the food chain, you may find yourself consumed.

But discontinuous change brings opportunity – both for innovation and for career advancement. The case for continuing education in engineering has never been stronger. There are numerous new and exciting areas of technology that are starving for experts. Getting out of our comfort zones and applying our problem-solving skills to a rapidly evolving new technology is likely to reap huge rewards. On the other hand, sitting back and “phoning it in” by doing more variations on the same design you’ve been doing for the last decade or so – not so much. It’s time to take some risks.
