
Digital Design Reinvented

AI Rewrites the Rules for Computing

It’s time to come clean. Let’s face it, digital design dudes and dudettes, we’ve been phoning it in.

It’s true. Those of us doing digital design have spent the last several decades designing one thing: the computer. Sure, we can do 4-bit, 8-bit, 16-bit, 32-bit, 64-bit, big-endian, little-endian, RISC, CISC, memory-managed, pipelined, single- or multi-core… The variations and subtleties go on and on. But, at the end of the day, one truth emerges: we long ago passed responsibility for the creation of actual, useful products over to the software engineers.

It’s pretty convenient, drawing this nice protective box around ourselves. We just make the computers. Everybody else is responsible for what they do.

This is not true of analog design, for example. When analog engineers designed a 1970s car stereo, they created a finished product. From the volume and tuning knobs to the antenna, the receiver section, the filters, and the audio amplifiers – analog design draws from a large repertoire of skills and techniques, but each new system has its own DNA. Analog design has always been an art, where each new system could embody new techniques, new compromises, and a creative clean slate. With digital design, we just build whatever computer the software folks need and call it a day. If something is wrong in the final product, the problem is obviously the software. Those people should learn to do “real engineering” – right? When you live in an armored silo, it seems safe to throw stones.

Programmable logic represents perhaps the pinnacle of this effect. With programmable logic, even hardware design becomes software. Sure, we use different languages, but hardware description languages are still software, synthesizers are compilers, and the wires and transistors in an FPGA are just what came from the factory. They don’t get re-designed for each new application. Got a problem with your FPGA design? Clearly it’s the code. 

The old adage says that when your only tool is a von Neumann machine, every problem looks like a nail – or something like that. But the digital design world is about to change, and the spoiler is neural networks. Artificial intelligence (AI) is not just another application. It is a new way of solving problems. The convergence of big data techniques and deep neural networks has created a new ecosystem for computation that virtually bypasses the entire existing von Neumann infrastructure. Sure, the first applications implemented AI as software on top of existing von Neumann-based computing infrastructure, but for the real applications of AI – from autonomous vehicles to robotics to an almost unimaginable variety of future applications – a completely new computing machine is required.

Today, GPUs, FPGAs, and a few purpose-built architectures are racing to grab real estate in this new computer. The battlefield is bifurcated, because the “training” phase of AI has very different computation requirements from the “inferencing” phase. In training, massive training data sets are analyzed. The training task is well-suited for data center situations, and it generally requires gobs of floating-point performance. GPUs are making strides in this area, as are FPGAs.  

The real payoff, though, is in the inferencing engines. Inferencing (at this point in AI evolution, at least) requires very high performance in low-precision fixed-point computing, which is a different animal entirely. FPGAs seem to be the leaders in the inferencing world right now because of their fixed-point prowess. Inferencing is potentially the more lucrative socket to win, because training happens once at the beginning, but every system deployed in the field has to do inferencing. Consider autonomous driving – the socket is in the car, not in the data center at the automotive engineering company.
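
To make the contrast concrete, here is a minimal NumPy sketch of post-training quantization (a generic illustration of the idea, not any particular vendor's tool flow). Weights learned in floating point during training are mapped onto int8 values, and the deployed inferencing engine then does almost all of its work in integer multiply-accumulate arithmetic.

```python
# A minimal sketch of post-training quantization (illustrative only).
# Weights come out of floating-point training; the deployed engine works
# in int8 with int32 accumulation -- the low-precision fixed-point math
# that FPGA DSP blocks and purpose-built inferencing engines excel at.
import numpy as np

def quantize_int8(x):
    """Map a float tensor onto int8 using a single per-tensor scale."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

# Pretend these came out of a floating-point training run in the data center.
rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 128)).astype(np.float32)
activations = rng.normal(size=(128,)).astype(np.float32)

qw, w_scale = quantize_int8(weights)
qa, a_scale = quantize_int8(activations)

# Integer multiply-accumulate, then one rescale per output value.
acc = qw.astype(np.int32) @ qa.astype(np.int32)
y_fixed = acc * (w_scale * a_scale)

# Compare against the full floating-point result.
y_float = weights @ activations
print("max quantization error:", np.max(np.abs(y_fixed - y_float)))
```

The point is not the Python; it is that once the network is trained, the arithmetic that matters in the field is narrow integer math, and that maps naturally onto fixed-point hardware.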

Inferencing requires a new kind of computer. In many cases, the latency, safety, and/or security requirements will preclude inferencing in the cloud. An autonomous car cannot afford to send a bunch of data up to a data center for analysis before deciding whether or not to apply the brakes or swerve. That decision has to be made locally, quickly, and reliably. Of course, it also has to be inexpensive and consume very little power.

This new AI inferencing computer is not simply some faster variant of a von Neumann machine. It is a different animal entirely. In order to meet the above requirements, it will have to rely very little on software and far more on native-hardware implementations of inferencing algorithms. And all indications are that the hardware architecture will be very application specific. There is not likely to be a one-size-fits-all AI inferencing machine any time in the near future.
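
As a toy illustration of what "application specific" can mean here, the sketch below (hypothetical, and in Python rather than a hardware description language) specializes a layer to one particular set of trained weights. It is the software analogue of baking the weights into an FPGA or fixed-function datapath: at run time there is no instruction fetch and no weight fetch, just the dedicated arithmetic.

```python
# A toy, purely illustrative stand-in for "application-specific" hardware:
# the trained weights are baked in as constants, the way an FPGA or
# fixed-function inferencing engine might hard-wire one particular network.
def build_fixed_layer(weights):
    """Return a function specialized to one trained weight matrix."""
    def layer(x):
        # Every multiply here would correspond to a dedicated MAC in
        # hardware; nothing is fetched from memory at run time.
        return [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    return layer

# "Training" has already happened elsewhere; only the result gets deployed.
trained_weights = [[0.5, -1.0], [2.0, 0.25]]
engine = build_fixed_layer(trained_weights)
print(engine([1.0, 3.0]))  # -> [-2.5, 2.75]
```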

Consider that many AI-driven applications will rely on hierarchically nested structures of neural network machines. One AI system may be busy recognizing people, objects, and the environment. A completely different AI machine may be taking those results and analyzing them in terms of some context or task at hand – staying on the road and not hitting any pedestrians, for example. Still another may be processing the results of that in the context of some larger task such as navigation or minimizing traffic congestion. These nested tasks, each performed by specialized computers, do not lend themselves at this point to a standardized hardware architecture.
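
A rough sketch of that kind of hierarchy is below. The stage names, data types, and thresholds are purely illustrative assumptions (not a real autonomous-driving stack), but they show the shape of the nesting: each level could run on its own specialized inferencing machine and pass only distilled results upward.

```python
# Hypothetical nesting of specialized AI "machines", stubbed as functions.
# Names, shapes, and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "pedestrian", "lane_marking"
    distance_m: float

def perception(camera_frame) -> list:
    # Level 1: recognize people, objects, and the environment.
    return [Detection("pedestrian", 12.0), Detection("lane_marking", 3.0)]

def situational_context(detections) -> dict:
    # Level 2: interpret detections for the task at hand --
    # stay on the road, don't hit any pedestrians.
    pedestrians = [d.distance_m for d in detections if d.label == "pedestrian"]
    return {"brake": bool(pedestrians) and min(pedestrians) < 20.0,
            "lane_centered": True}

def mission_planner(context) -> str:
    # Level 3: the larger task -- navigation, traffic flow.
    return "hold_route" if context["lane_centered"] else "replan"

# Each level could run on its own application-specific inferencing hardware;
# only the distilled results cross the boundary between levels.
frame = object()  # placeholder for a camera frame
print(mission_planner(situational_context(perception(frame))))
```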

To put it in simplest terms, digital designers need to get back to work. For the first time in decades, we need to get back closer to the application. We need to create new machines that are tailored to a specific task, rather than to a type of software. We need to take back some responsibility for the solution, rather than simply cranking out a new variant of our one component.

While FPGAs and GPUs are fighting for prominence in this new AI-driven world, nobody can seriously believe that either is a long-term solution. Someone will eventually create an architecture that directly addresses the needs of AI. That new architecture will need to demonstrate significant advantages over GPUs and FPGAs in order to thrive, but it almost certainly will happen.

One thing that is clear, though, is that software will not have the same role in the AI world as it did in the past. Software engineers will no longer be creating the algorithms that implement the solution. The intelligence of any application will be generated based on enormous training data sets, and the efficacy and differentiation will come from the quality of training, rather than the thoughtful application of software design techniques. In fact, because hardware will become more application specific, and software less so, we may see a virtual swapping of roles between hardware and software engineers in the success of new systems.

In most systems design teams today, the ratio of software to hardware engineers has skewed dramatically toward software – often in a ratio of 10:1 or more. And, in most system design projects, it is the software that is still the gating factor in schedules, quality, and performance. With AI, though, that ratio may well need to shift. When system design companies can no longer rely on buying a semi-generic computer and customizing it with software alone, there may well be an increase in demand for hardware engineers with the skills required to create custom computing architectures for specific AI applications. That is likely to turn today’s industry on its head. It will be interesting to watch.

5 thoughts on “Digital Design Reinvented”

  1. I don’t see any mention of data scientists in this blog. Surely the change is not going to be from s/w to h/w but from coding to modelling and training (In the sense of modelling the problem at hand and designing and training the network), both of which require a new skill set different to s/w or hardware engineering. I suspect also that the design of hardware ML engines is going to be done by a few specialist companies. Just now, there is a lot of “heat” out there in this field. Expect some cooling and consolidation as things mature….

  2. Right, a completely new computing machine is required. This is good news for hardware engineers. The self-driving car, for example, cannot rely on the cloud; it must have multiple AI machines to solve problems in real time. Great article about the future of computing, thanks.

  3. Yes, a new kind of computing machine is required. But AI and machine learning are more than computation. Data quality analysis and other modeling, inference, and statistical analysis will most likely still be the job of the software engineers.

    I guess in the end the problem is still hardware-software partitioning/optimization. With AI, it seems the requirement for tight hardware/software optimization is higher, and more features will be implemented in hardware. But I doubt this will change the software-dominant trend.

  4. And one very important thing is that existing so-called design tools are in no way usable, because they do nothing to help with connecting and randomly accessing data and functions embedded in HW networks.
    That is also after the world realizes that cache-coherent data-streaming memory systems have already hit the “memory wall”.
    Both caches and superscalar RISC are based on gaining performance by streaming blocks of data and using nearby data or partial results.
    Memory systems will need to interleave transfers of small pieces of data from many locations. The old is becoming new again: interleaved memory access.
    Just when HLS can almost handle computation-intensive algorithms…
