
Hug a Data Scientist

Time for a Sea Change in Computing

It’s been a long and mutually productive relationship, but it’s time to break up.

For the five-decades-plus run of Moore’s Law, we electronics engineers have been married to software engineers. It’s been a love/hate relationship for sure, but together the two communities have achieved the single greatest accomplishment in the history of technology – the evolution of the modern computing infrastructure. Over those five decades of collaboration, every aspect of the global computing machine has advanced by orders of magnitude. On the hardware side, that progress spans processor performance, energy efficiency, size, cost, memory density, reliability, storage capacity, network bandwidth, reach, and latency.

On the software side, we’ve seen the evolution of languages, compilers, operating systems, development tools, and productivity, coupled with an immense repository of powerful algorithms that can efficiently perform practically any calculation under the sun. Our understanding of the nature of information, the solvability of problems, and the limitations of human capacity to handle complexity has made quantum leaps in this golden age of software engineering.

These advancements in hardware and software engineering have gone hand-in-hand, with bi-directional feedback loops where software engineers have set the demands for the next generation of hardware, and hardware engineers have enabled new methods and advancements in software. Software engineering was always the customer of hardware engineering, pushing and directing the evolution of hardware to serve its needs.

Now, however, we have come to a fork in the road, and this partnership that has flourished for decades has to make way for an interloper. Since about 2012, fueled by advances in computing hardware, progress in artificial intelligence (AI) has exploded. Some experts claim that more headway has been made in the past three years than in the entire history of AI prior to that. AI is now being adopted across a huge swath of applications such as machine vision, autonomous driving, speech and language interpretation, and almost countless other areas. 

This dramatic uptake in AI adoption has surfed atop significant advances in hardware and software technology, including heterogeneous computing with GPUs and FPGAs, networking, cloud-based computing, big data, sensor technology, and more. These advances have allowed AI to advance at unprecedented rates and have set the stage for a wide range of applications previously in (or beyond) the domain of conventional software engineering to shift to AI-based approaches.

For us as hardware engineers, however, moving forward from this point requires a change in our primary professional relationships. Artificial neural networks (ANNs) require specialized hardware that is outside the scope of conventional von Neumann-based computing. And different applications require different hardware. Some particularly powerful techniques, such as long short-term memory (LSTM) used in recurrent neural networks (RNNs), practically demand specialized hardware compute engines – possibly on a per-application basis.
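To see why, consider the inner loop of an LSTM. The minimal NumPy sketch below is illustrative only (the layer sizes and the weight names W, U, and b are assumptions, not taken from any particular design), but it shows the two dense matrix multiplies performed at every time step and the sequential dependence between steps that general-purpose processors handle poorly.

```python
# A minimal sketch of one LSTM time step in plain NumPy.
# All sizes and names here are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step: the gate math is dominated by two dense matmuls."""
    z = W @ x_t + U @ h_prev + b        # W: (4*hidden, inputs), U: (4*hidden, hidden)
    i, f, o, g = np.split(z, 4)         # input, forget, output gates and candidate
    c_t = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)
    h_t = sigmoid(o) * np.tanh(c_t)
    return h_t, c_t

hidden, inputs = 256, 128               # assumed layer sizes
rng = np.random.default_rng(0)
W = rng.standard_normal((4 * hidden, inputs))
U = rng.standard_normal((4 * hidden, hidden))
b = np.zeros(4 * hidden)

# Each step consumes the previous step's h and c, so the time dimension
# cannot simply be parallelized away on a conventional processor.
h = np.zeros(hidden)
c = np.zeros(hidden)
for x_t in rng.standard_normal((10, inputs)):   # a 10-step input sequence
    h, c = lstm_step(x_t, h, c, W, U, b)
```

A dedicated compute engine can keep those matrix-multiply datapaths saturated and hold the recurrent state on-chip, which is precisely the kind of per-application specialization described above.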

Simply put, the hardware needs of AI and the hardware needs of traditional software development are diverging in a big way. And, while traditional software development will likely require only an evolutionary improvement of current conventional computing hardware, AI-specific hardware is a different game entirely, and it will be in a massive state of flux over the coming decades. While “software” will continue to ask us for faster, smaller, cheaper, more efficient versions of the same basic architectures, AI will require revolutionary new types of logic that have not been designed before.

Data science (and AI) as a field has advanced well ahead of our ability to put its techniques to practical use. The complexities of choosing, applying, and tuning the AI techniques for any particular application will be far beyond the average hardware designer’s experience and understanding of AI, and the complex hardware architectures required to achieve optimal performance on those applications will be far outside the expertise of most data scientists. To advance AI at its maximum rate and to realize its potential, we as hardware engineers must form deep collaborative relationships with data scientists and AI experts, similar to those we have had with software engineers for the past five decades.

This does not imply that traditional software development has no role in the advancement of AI. In fact, most current implementations of AI and neural network techniques are written in traditional software. The adaptation to custom hardware generally comes far later, after the techniques have been proven in software. But for most AI applications to reach practical usability, custom hardware is required.

The immense amount of computation required for ANN implementation, and particularly for deep neural network (DNN) implementation in real-world applications, is normally considered a cloud/data center problem. Today, cloud service suppliers are bolstering their offerings with GPU and FPGA acceleration to assist with both the training and inference tasks in DNNs. But for many of the most interesting applications (autonomous driving, for example), the latency and connectivity requirements make cloud-based solutions impractical. In those situations, the DNN (the inference portion, at a minimum) must run locally, on what are essentially embedded AI engines. Apple, for example, chose to run iPhone X face recognition on an on-board AI engine rather than send sensitive identification data over the network, keeping the information local.
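A quick back-of-envelope calculation illustrates the latency argument. The numbers below (vehicle speed, cloud round-trip time, local inference time) are assumptions chosen only to show the scale of the difference, not measurements:

```python
# Back-of-envelope sketch: distance traveled while waiting for an inference result.
# All numbers are illustrative assumptions, not measurements.
speed_m_per_s = 30.0         # ~108 km/h highway speed (assumed)
cloud_round_trip_s = 0.100   # assumed network round trip to a data center
local_inference_s = 0.010    # assumed on-board DNN inference latency

for label, latency in [("cloud", cloud_round_trip_s), ("local", local_inference_s)]:
    print(f"{label}: car travels {speed_m_per_s * latency:.1f} m before a result arrives")
# cloud: car travels 3.0 m before a result arrives
# local: car travels 0.3 m before a result arrives
```

Even with a generous network assumption, the meters a vehicle covers while waiting on the data center are hard to justify when the same inference can run on-board.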

Over the past several years, much of digital hardware design has devolved into an integration and verification task. Powerful libraries of IP, rich sets of reference designs, and well-defined standardized interfaces have reduced the amount of detailed logic design required of the typical hardware engineer to almost nil. Our days are more often spent plugging together large pre-engineered blocks and then debugging and tweaking issues with interfaces as well as system-wide concerns such as power, performance, form factor, and cost.

But the coming AI revolution is likely to change all that. In order to keep up with rapidly evolving techniques for native hardware AI implementations, we’ll be required to break away from our dependence on off-the-shelf, pre-engineered hardware and re-immerse ourselves in the art of detailed design. For FPGA and ASIC designers, a whole new land of opportunity emerges beyond the current “design yet another slightly-specialized SoC” mantra. It will be an exciting and trying time. We’ll change the world, yet again.
