
Computing at a Crossroads

Redefining System-Level Engineering

Computing is at a crossroads. For decades, we have surfed the exponential wave of Moore’s Law, tuning and tweaking von Neumann architectures, resizing caches, redefining pipelines, debating RISC vs. CISC, messing with memory structures, widening words, predicting branches, and generally futzing around until we reached a point where we could claim victory for another node. We have built schemes for peripherals, processors, memory, and storage to communicate; bolted on accelerators for various purposes; and tested variations on a theme for specialized problems such as signal processing and graphics.

With all of this evolution, refinement, and tuning, the core driver of progress was Moore’s Law. Every two years we were awarded this gift of more transistors switching faster with less power and at lower cost. It’s hard to lose when you’ve got that kind of built-in advantage working for you. And, on top of all that hardware stuff, an enormous ecosystem of software, compilers, operating systems, development environments, and tools evolved.  

Now, however, Moore’s Law is running out of gas – maybe not stopping completely, but losing momentum in a game-changing way. Each new node takes closer to three years than two. Each new node costs exponentially more to use than the last. And the bounty of price, performance, and power is dramatically reduced, forcing us to choose one or two of those benefits at the expense of the rest. For many applications, the benefits of the latest process node are dubious or nonexistent. It’s better to sit back and enjoy semiconductor technology from half a decade ago than to push your product to the edge and risk falling off.

At the same time, AI is coming into its own. Neural networks are becoming a viable solution for more and more problems. Engineering talent that was previously deployed to design new systems based on conventional computing has been redirected to focus on data science and deep learning. We are designing systems that span the IoT from edge to cloud and back. Engineering has escaped from the box. We no longer design devices; we create services delivered by heterogeneous systems that distribute complex computing problems across a wide range of architectures optimized for various sub-tasks. (Try saying THAT three times fast.)

New packaging technologies are changing our unit of design currency from the SoC to the SiP (system in package). With Moore’s Law stagnating, it is often advantageous to manufacture memories, logic, analog, and interfaces on different process technologies. But the advantages of integration in a single package persist. As a result, we are seeing the definition of “device” revisited – from predominantly monolithic silicon CMOS chips in a package to complex combinations of diverse chiplets communicating via interposers or other similar schemes.

Let’s face it – for a long time now, digital design at the system level has mostly been about building faster, smaller, cheaper, more efficient von Neumann computers. Our notion of custom chip design shifted from ASIC – literally “application-specific integrated circuit” – to SoC – “system on a chip” – where the “system” was narrowly defined as a processor with an appropriate set of peripherals. Most custom digital chip design became a task of integrating increasingly complicated sets of IP blocks – processors, peripherals, memories and memory interfaces, and various IO schemes – into a new monolithic IC, often with very little application-specific hardware content.  

We are now at an inflection point where the whole game may change. For one thing, we may be integrating much more often at the package level. And, while the ecosystems for integrating at the board level and at the monolithic chip level are fairly well defined, the flow for developing an SiP is considerably less mature. SiP development may involve sourcing chiplets from various suppliers and integrating them with one of several packaging schemes. Since industry standards are far from mature in this space, most development will be somewhat custom – which means more work for engineering experts in the latest packaging techniques.

Increasing use of various types of accelerators and neural network processors will dramatically complicate hardware design. We won’t just grab the latest appropriate ARM core and synthesize it into a usable block on a generic-ish SoC. Our applications will often be partitioned across heterogeneous computing machines, with different types of hardware optimized for various parts of the task – sensor fusion on MCUs or FPGAs, pattern matching on neural networks, high-level application management on conventional processors – the list goes on and on. The days of a “system” consisting of a single processor or MCU running a simple software stack are disappearing into the rearview mirror.

All of this discontinuity has serious career implications for engineers. More of the tasks we have become “expert” in are being subsumed into pre-engineered, reusable blocks of various types. System-level engineering has moved another layer up the abstraction tree and pulled many of the more specialized disciplines along with it. If you were the power supply expert at a systems company, you may now simply be selecting the best power module. If you were the RTL guru, you may now be tasked with integrating synthesizable blocks designed by others. If your expertise (and enthusiasm) locks you into a particular role in the food chain, you may find yourself consumed.

But discontinuous change brings opportunity – both for innovation and for career advancement. The benefits of continuing education in engineering have never been greater. There are numerous new and exciting areas of technology that are starving for experts. Getting out of our comfort zones and applying our problem-solving skills to a rapidly evolving new technology is likely to reap huge rewards. On the other hand, sitting back and “phoning it in” by doing more variations on the same design you’ve been doing for the last decade or so – not so much. It’s time to take some risks.
