
Digital Design Reinvented

AI Rewrites the Rules for Computing

It’s time to come clean. Let’s face it, digital design dudes and dudettes, we’ve been phoning it in.

It’s true. Those of us doing digital design have spent the last several decades designing one thing: the computer. Sure, we can do 4-bit, 8-bit, 16-bit, 32-bit, 64-bit, big-endian, little-endian, RISC, CISC, memory-managed, pipelined, single- or multi-core… The variations and subtleties go on and on. But, at the end of the day, one truth emerges: we long ago passed responsibility for the creation of actual, useful products over to the software engineers.

It’s pretty convenient, drawing this nice protective box around ourselves. We just make the computers. Everybody else is responsible for what they do.

This is not true of analog design, for example. When analog engineers designed a 1970s car stereo, they created a finished product. From the volume and tuning knobs to the antenna, the receiver section, the filters, and the audio amplifiers – analog design draws from a large repertoire of skills and techniques, but each new system has its own DNA. Analog design has always been an art, where each new system could embody new techniques, new compromises, and a creative clean slate. With digital design, we just build whatever computer the software folks need and call it a day. If something is wrong in the final product, the problem is obviously the software. Those people should learn to do “real engineering” – right? When you live in an armored silo, it seems safe to throw stones.

Programmable logic represents perhaps the pinnacle of this effect. With programmable logic, even hardware design becomes software. Sure, we use different languages, but hardware description languages are still software, synthesizers are compilers, and the wires and transistors in an FPGA are just what came from the factory. They don’t get re-designed for each new application. Got a problem with your FPGA design? Clearly it’s the code. 

The old adage says that when your only tool is a von Neumann machine, every problem looks like a nail – or something like that. But the digital design world is about to change, and the spoiler is neural networks. Artificial intelligence (AI) is not just another application. It is a new way of solving problems. The convergence of big data techniques and deep neural networks has created a new ecosystem for computation that virtually bypasses the entire existing von Neumann infrastructure. Sure, the first applications implemented AI as software on top of existing von Neumann-based computing infrastructure, but for the real applications of AI – from autonomous vehicles to robotics to an almost unimaginable variety of future applications – a completely new computing machine is required.

Today, GPUs, FPGAs, and a few purpose-built architectures are racing to grab real estate in this new computer. The battlefield is bifurcated, because the “training” phase of AI has very different computation requirements from the “inferencing” phase. In training, massive data sets are analyzed to tune the network’s parameters. Training is well suited to the data center, and it generally requires gobs of floating-point performance. GPUs are making strides in this area, as are FPGAs.

The real payoff, though, is in the inferencing engines. Inferencing (at this point in AI evolution, at least) requires very high performance in low-precision fixed-point computing, which is a different animal entirely. FPGAs seem to be the leaders in the inferencing world at this point because of their fixed-point prowess. Inferencing is also potentially the more lucrative socket to win, because training happens once at the beginning, but every system deployed in the field has to do inferencing. Consider autonomous driving – the socket is in the car, not in the data center at the automotive engineering company.
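To make that floating-point-versus-fixed-point distinction concrete, here is a minimal sketch (Python with NumPy; the symmetric int8 scheme and every number in it are our own illustrative assumptions, not any particular vendor’s flow) of how a layer trained in float32 can be executed with low-precision integer arithmetic:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 64)).astype(np.float32)   # activations
w = rng.standard_normal((64, 10)).astype(np.float32)  # trained float32 weights

def quantize_int8(t):
    """Symmetric per-tensor quantization: map floats onto [-127, 127]."""
    scale = np.max(np.abs(t)) / 127.0
    return np.round(t / scale).astype(np.int8), scale

xq, sx = quantize_int8(x)
wq, sw = quantize_int8(w)

# The multiply-accumulate runs entirely in integer arithmetic (int8 inputs,
# int32 accumulators); one rescale at the end recovers real-valued outputs.
y_fixed = (xq.astype(np.int32) @ wq.astype(np.int32)) * (sx * sw)
y_float = x @ w  # the floating-point reference

print("worst-case error vs. float32:", np.max(np.abs(y_fixed - y_float)))
```

The architectural point is that the inner loop needs nothing but small integer multipliers and adders – precisely the resource FPGAs offer in abundance.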

Inferencing requires a new kind of computer. In many cases, the latency, safety, and/or security requirements will preclude inferencing in the cloud. An autonomous car cannot afford to send a bunch of data up to a data center for analysis before deciding whether to apply the brakes or swerve. That decision has to be made locally, quickly, and reliably. Of course, the machine making it also has to be inexpensive and consume very little power.

This new AI inferencing computer is not simply some faster variant of a von Neumann machine. It is a fundamentally different machine. In order to meet the above requirements, it will have to rely very little on software and far more on native-hardware implementations of inferencing algorithms. And all indications are that the hardware architecture will be very application specific. There is not likely to be a one-size-fits-all AI inferencing machine any time in the near future.

Consider that many AI-driven applications will rely on hierarchically nested structures of neural network machines. One AI system may be busy recognizing people, objects, and the environment. A completely different AI machine may be taking those results and analyzing them in terms of some context or task at hand – staying on the road and not hitting any pedestrians, for example. Still another may be processing the results of that in the context of some larger task, such as navigation or minimizing traffic congestion. These nested tasks, each performed by a specialized computer, do not lend themselves at this point to a standardized hardware architecture.
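As a purely hypothetical sketch of that nesting (the stage names, data, and thresholds below are invented for illustration, not drawn from any real driving stack), the three levels might chain together like this, with each stage a candidate for its own purpose-built engine:

```python
def perception(sensor_frame):
    """Level 1: recognize people, objects, and the environment (stubbed)."""
    return {"pedestrian_range_m": 12.0, "lane_offset_m": 0.2}

def tactical(scene):
    """Level 2: analyze the scene for the task at hand (stubbed)."""
    return {"brake": scene["pedestrian_range_m"] < 15.0,
            "steer": -scene["lane_offset_m"]}

def strategic(maneuver, destination):
    """Level 3: fold the maneuver into a larger goal such as navigation."""
    return {"action": "stop" if maneuver["brake"] else "proceed",
            "toward": destination}

frame = None  # stand-in for a camera/lidar frame
print(strategic(tactical(perception(frame)), destination="exit 42"))
```

Each level has its own data rates, precision needs, and latency budget, which is exactly why a single standardized engine is a poor fit.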

To put it in simplest terms, digital designers need to get back to work. For the first time in decades, we need to get back closer to the application. We need to create new machines that are tailored to a specific task, rather than to a type of software. We need to take back some responsibility for the solution, rather than simply cranking out a new variant of our one component.

While FPGAs and GPUs are fighting for prominence in this new AI-driven world, nobody can seriously believe that either is a long-term solution. Someone will create an architecture that directly addresses the needs of AI. That new architecture will need to demonstrate significant advantages over GPUs and FPGAs in order to thrive, but it almost certainly will happen.

One thing that is clear, though, is that software will not have the same role in the AI world as it did in the past. Software engineers will no longer be creating the algorithms that implement the solution. The intelligence of any application will be generated from enormous training data sets, and the efficacy and differentiation will come from the quality of training rather than from the thoughtful application of software design techniques. In fact, because hardware will become more application specific, and software less so, we may see a virtual swapping of roles between hardware and software engineers in determining the success of new systems.
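A toy example of that shift (plain NumPy logistic regression; the braking scenario and every number in it are invented for illustration): nobody writes the decision rule here, the parameters are learned from labeled examples.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 10.0, 200)            # feature: distance to obstacle (m)
y = (X < 4.0).astype(np.float64)           # label: 1 = brake, 0 = continue

w, b = 0.0, 0.0                            # the "algorithm" starts empty
for _ in range(2000):                      # plain logistic-regression training
    p = 1.0 / (1.0 + np.exp(-(w * X + b))) # predicted P(brake)
    w -= 0.1 * np.mean((p - y) * X)        # gradient steps driven by the data
    b -= 0.1 * np.mean(p - y)

# The learned parameters, not hand-written code, now encode "brake when close".
for d in (3.0, 8.0):
    print(f"P(brake | {d} m) = {1.0 / (1.0 + np.exp(-(w * d + b))):.2f}")
```

The decision rule exists only as the learned values of w and b; change the training data and the behavior changes, with no code rewrite.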

In most systems design teams today, the ratio of software to hardware engineers has skewed dramatically toward software – often in a ratio of 10:1 or more. And, in most system design projects, it is the software that is still the gating factor in schedules, quality, and performance. With AI, though, that ratio may well need to shift. When system design companies can no longer rely on buying a semi-generic computer and customizing it with software alone, there may well be an increase in demand for hardware engineers with the skills required to create custom computing architectures for specific AI applications. That is likely to turn today’s industry on its head. It will be interesting to watch.

5 thoughts on “Digital Design Reinvented”

  1. I don’t see any mention of data scientists in this blog. Surely the change is not going to be from s/w to h/w but from coding to modelling and training (in the sense of modelling the problem at hand and designing and training the network), both of which require a new skill set different from s/w or hardware engineering. I suspect also that the design of hardware ML engines is going to be done by a few specialist companies. Just now, there is a lot of “heat” out there in this field. Expect some cooling and consolidation as things mature….

  2. Right, a completely new computing machine is required. This is good news for hardware engineers. The self-driving car, for example, cannot rely on the cloud; it must have multiple AI machines to solve problems in real time. Great article about the future of computing, thanks.

  3. Yes, a new kind of computing machine is required. But AI and machine learning are more than computation. Data quality analysis and other modeling, inference, and statistical analysis will most likely still be the job of the software engineers.

    I guess in the end the problem is still hardware/software partitioning and optimization. With AI, it seems the requirement for tight hardware/software optimization is higher, and more features will be implemented in hardware. But I doubt this will change the software-dominant trend.

  4. And one very important thing is that existing so-called design tools are in no way usable, because they do nothing to help with connecting and randomly accessing data and functions embedded in HW networks.
    That is also after the world realizes that cache-coherent data-streaming memory systems have already hit the “memory wall”.
    Both caches and superscalar RISC are based on gaining performance by streaming blocks of data and using nearby data or partial results.
    Memory systems will need to interleave transfers of small pieces of data from many locations. The old is becoming new again – interleaved memory access.
    Just when HLS can almost handle computation-intensive algorithms…
