
Digital Design Reinvented

AI Rewrites the Rules for Computing

It’s time to come clean. Let’s face it, digital design dudes and dudettes, we’ve been phoning it in.

It’s true. Those of us doing digital design have spent the last several decades designing one thing: the computer. Sure, we can do 4-bit, 8-bit, 16-bit, 32-bit, 64-bit, big-endian, little-endian, RISC, CISC, memory-managed, pipelined, single- or multi-core… The variations and subtleties go on and on. But, at the end of the day, one truth emerges: we long ago passed responsibility for the creation of actual, useful products over to the software engineers.

It’s pretty convenient, drawing this nice protective box around ourselves. We just make the computers. Everybody else is responsible for what they do.

This is not true of analog design, for example. When analog engineers designed a 1970s car stereo, they created a finished product. From the volume and tuning knobs to the antenna, the receiver section, the filters, and the audio amplifiers – analog design draws from a large repertoire of skills and techniques, but each new system has its own DNA. Analog design has always been an art – where each new system could embody new techniques, new compromises, and a creative clean slate. With digital design, we just build whatever computer the software folks need and call it a day. If something is wrong in the final product, the problem is obviously the software. Those people should learn to do “real engineering” – right? When you live in an armored silo, it seems safe to throw stones.

Programmable logic represents perhaps the pinnacle of this effect. With programmable logic, even hardware design becomes software. Sure, we use different languages, but hardware description languages are still software, synthesizers are compilers, and the wires and transistors in an FPGA are just what came from the factory. They don’t get re-designed for each new application. Got a problem with your FPGA design? Clearly it’s the code. 

The old adage says that when your only tool is a von Neumann machine, every problem looks like a nail – or something like that. But the digital design world is about to change, and the spoiler is neural networks. Artificial intelligence (AI) is not just another application. It is a new way of solving problems. The convergence of big data techniques and deep neural networks has created a new ecosystem for computation that virtually bypasses the entire existing von Neumann infrastructure. Sure, the first applications implemented AI as software on top of existing von Neumann-based computing infrastructure, but for the real applications of AI – from autonomous vehicles to robotics to an almost unimaginable variety of future applications – a completely new computing machine is required.
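To make the workload concrete, here is a minimal sketch (illustrative only; the two-layer structure and sizes are my own assumptions, not from the article) of what a deep network actually asks the hardware to do: layers of multiply-accumulate operations and simple nonlinearities, with almost no data-dependent control flow – a structure that rewards dedicated datapaths far more than an instruction-at-a-time von Neumann pipeline.

```python
import numpy as np

# Illustrative sketch only: the inner workload of neural-network inference.
# Each layer is a matrix-vector multiply followed by a simple nonlinearity;
# the "program" is essentially just weights plus arithmetic.

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, w1, b1, w2, b2):
    """Two dense layers chained together."""
    hidden = relu(w1 @ x + b1)
    return w2 @ hidden + b2

rng = np.random.default_rng(0)
x  = rng.standard_normal(64)           # input features (e.g., sensor data)
w1 = rng.standard_normal((128, 64))    # learned weights, layer 1
b1 = np.zeros(128)
w2 = rng.standard_normal((10, 128))    # learned weights, layer 2
b2 = np.zeros(10)

print(forward(x, w1, b1, w2, b2).shape)   # -> (10,)
```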

Today, GPUs, FPGAs, and a few purpose-built architectures are racing to grab real estate in this new computer. The battlefield is bifurcated, because the “training” phase of AI has very different computation requirements from the “inferencing” phase. In training, massive training data sets are analyzed. The training task is well-suited for data center situations, and it generally requires gobs of floating-point performance. GPUs are making strides in this area, as are FPGAs.  
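As a rough illustration of why training leans on floating point, here is a hedged sketch of a single stochastic-gradient-descent step on a one-layer model (the model, loss, and learning rate are assumptions made purely for illustration). The weight updates are tiny relative to the weights themselves, which is exactly where floating-point range and precision earn their keep.

```python
import numpy as np

# Illustrative sketch only: one SGD step on a single linear layer in float32.
rng = np.random.default_rng(1)
w = rng.standard_normal((10, 64)).astype(np.float32)   # weights being trained
x = rng.standard_normal((64, 32)).astype(np.float32)   # a mini-batch of inputs
y = rng.standard_normal((10, 32)).astype(np.float32)   # training targets

pred = w @ x                            # forward pass
grad = (pred - y) @ x.T / x.shape[1]    # gradient of mean-squared error w.r.t. w
w -= 1e-3 * grad                        # small update, easily lost in fixed point
```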

The real payoff, though, is in the inferencing engines. Inferencing (at this point in AI evolution, at least) requires very high performance in low-precision fixed-point computing, which is a different animal entirely. FPGAs seem to be leaders in the inferencing world at this point because of their fixed-point prowess. Inferencing is potentially the more lucrative socket to win, because training happens once up front, while every system deployed in the field has to do inferencing. Consider autonomous driving – the socket is in the car, not in the data center at the automotive engineering company.
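For contrast, here is a minimal sketch of post-training quantization and an int8 inference kernel (the symmetric quantization scheme and shapes are assumptions for illustration). Once the weights are frozen, the hot loop collapses to low-precision integer multiply-accumulates – the kind of arithmetic FPGA fabric handles very efficiently.

```python
import numpy as np

# Illustrative sketch only: quantize trained float weights to int8, do the
# multiply-accumulate work in integers, then rescale the result at the end.

def quantize(a, bits=8):
    """Symmetric linear quantization to signed integers."""
    scale = np.max(np.abs(a)) / (2 ** (bits - 1) - 1)
    return np.round(a / scale).astype(np.int8), scale

rng = np.random.default_rng(2)
w = rng.standard_normal((10, 64)).astype(np.float32)   # trained weights
x = rng.standard_normal(64).astype(np.float32)         # one input vector

wq, w_scale = quantize(w)
xq, x_scale = quantize(x)

acc = wq.astype(np.int32) @ xq.astype(np.int32)   # integer MACs (the hot loop)
y = acc * (w_scale * x_scale)                     # rescale to real-valued output

print(np.max(np.abs(y - w @ x)))   # quantization error stays small
```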

Inferencing requires a new kind of computer. In many cases, the latency, safety, and/or security requirements will preclude inferencing in the cloud. An autonomous car cannot afford to send a bunch of data up to a data center for analysis before deciding whether or not to apply the brakes or swerve. That decision has to be made locally, quickly, and reliably. Of course, it also has to be inexpensive and consume very little power.

This new AI inferencing computer is not simply some faster variant of a von Neumann machine. It is a different animal entirely. In order to meet the above requirements, it will have to rely very little on software and far more on native-hardware implementations of inferencing algorithms. And all indications are that the hardware architecture will be very application specific. There is not likely to be a one-size-fits-all AI inferencing machine any time in the near future.

Consider that many AI-driven applications will rely on hierarchically nested structures of neural network machines. One AI system may be busy recognizing people, objects, and the environment. A completely different AI machine may be taking those results and analyzing them in terms of some context or task at hand – staying on the road and not hitting any pedestrians, for example. Still another may be processing the results of that in the context of some larger task such as navigation or minimizing traffic congestion. These nested tasks, each performed by specialized computers, do not lend themselves at this point to a standardized hardware architecture.
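As a sketch of that hierarchy (the stage names and the data passed between them are hypothetical, invented only to illustrate the structure), each level consumes the distilled output of the level below rather than raw data, and each could run on its own specialized inference engine:

```python
# Illustrative sketch only: three hypothetical, independently specialized
# inference stages chained together, each a candidate for its own engine.

def perceive(camera_frame):
    """Stage 1: recognize people, objects, and the drivable environment."""
    return {"pedestrians": [], "lane": "center", "obstacles": []}

def assess(scene):
    """Stage 2: interpret the scene against the immediate task at hand."""
    return {"action": "brake" if scene["pedestrians"] else "continue"}

def plan(assessment, route):
    """Stage 3: fold the result into a larger goal such as navigation."""
    return {"maneuver": assessment["action"], "next_waypoint": route[0]}

frame = object()   # stand-in for a camera frame
print(plan(assess(perceive(frame)), route=["exit 12"]))
```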

To put it in simplest terms, digital designers need to get back to work. For the first time in decades, we need to get back closer to the application. We need to create new machines that are tailored to a specific task, rather than to a type of software. We need to take back some responsibility for the solution, rather than simply cranking out a new variant of our one component.

While FPGAs and GPUs are fighting for prominence in this new AI-driven world, nobody can seriously believe that either is a long-term solution. Someone will create an architecture that directly addresses the needs of AI. That new architecture will have to demonstrate significant advantages over GPUs and FPGAs in order to thrive, but it almost certainly will happen.

One thing that is clear, though, is that software will not have the same role in the AI world as it did in the past. Software engineers will no longer be creating the algorithms that implement the solution. The intelligence of any application will be generated based on enormous training data sets, and the efficacy and differentiation will come from the quality of training, rather than the thoughtful application of software design techniques. In fact, because hardware will become more application specific, and software less so, we may see a virtual swapping of roles between hardware and software engineers in the success of new systems.

In most systems design teams today, the ratio of software to hardware engineers has skewed dramatically toward software – often in a ratio of 10:1 or more. And, in most system design projects, it is the software that is still the gating factor in schedules, quality, and performance. With AI, though, that ratio may well need to shift. When system design companies can no longer rely on buying a semi-generic computer and customizing it with software alone, there may well be an increase in demand for hardware engineers with the skills required to create custom computing architectures for specific AI applications. That is likely to turn today’s industry on its head. It will be interesting to watch.

5 thoughts on “Digital Design Reinvented”

  1. I don’t see any mention of data scientists in this blog. Surely the change is not going to be from s/w to h/w but from coding to modelling and training (in the sense of modelling the problem at hand and designing and training the network), both of which require a new skill set different from s/w or hardware engineering. I suspect also that the design of hardware ML engines is going to be done by a few specialist companies. Just now, there is a lot of “heat” out there in this field. Expect some cooling and consolidation as things mature…

  2. Right, a completely new computing machine is required. This is good news for hardware engineers. The self-driving car, for example, cannot rely on the cloud; it must have multiple AI machines to solve problems in real time. Great article about the future of computing – thanks.

  3. Yes, a new kind of computing machine is required. But AI and machine learning are more than computation. Data quality analysis and other modeling, inference, and statistical analysis will most likely still be the job of the software engineers.

    I guess in the end the problem is still hardware/software partitioning and optimization. With AI, it seems the requirement for tight hardware/software optimization is higher, and more features will be implemented in hardware. But I doubt this will change the software-dominant trend.

  4. And one very important thing is that existing so-called design tools are in no way usable, because they do nothing to help with connecting and randomly accessing data and functions embedded in HW networks.
    That is also after the world realizes that cache-coherent, data-streaming memory systems have already hit the “memory wall”.
    Both cache and superscalar RISC are based on gaining performance by streaming blocks of data and using nearby data or partial results.
    Memory systems will need to interleave transfers of small pieces of data from many locations. The old is becoming new again – interleaved memory access.
    Just when HLS can almost handle computation-intensive algorithms…
