
Hug a Data Scientist

Time for a Sea Change in Computing

It’s been a long and mutually productive relationship, but it’s time to break up.

For the five-decade-plus run of Moore’s Law, we electronics engineers have been married to software engineers. It’s been a love/hate relationship for sure, but together the two communities have achieved the single greatest accomplishment in the history of technology – the evolution of the modern computing infrastructure. Over those five decades of collaboration, every aspect of the global computing machine has improved by orders of magnitude. On the hardware side, we have seen countless orders of magnitude of progress in parameters like processor performance, energy efficiency, size, cost, memory density, reliability, storage capacity, network bandwidth, reach, and latency.

On the software side, we’ve seen the evolution of languages, compilers, operating systems, development tools, and productivity, coupled with an immense repository of powerful algorithms that can efficiently perform practically any calculation under the sun. Our understanding of the nature of information, the solvability of problems, and the limits of human capacity to handle complexity has made quantum leaps in this golden age of software engineering.

These advancements in hardware and software engineering have gone hand-in-hand, with bidirectional feedback loops in which software engineers have set the demands for the next generation of hardware, and hardware engineers have enabled new methods and advancements in software. Software engineering has always been the customer of hardware engineering, pushing and directing the evolution of hardware to serve its needs.

Now, however, we have come to a fork in the road, and this partnership that has flourished for decades has to make way for an interloper. Since about 2012, fueled by advances in computing hardware, progress in artificial intelligence (AI) has exploded. Some experts claim that more headway has been made in the past three years than in the entire history of AI prior to that. AI is now being adopted across a huge swath of applications – machine vision, autonomous driving, speech and language interpretation, and almost countless other areas.

This dramatic uptake of AI has surfed atop significant advances in hardware and software technology, including heterogeneous computing with GPUs and FPGAs, networking, cloud-based computing, big data, sensor technology, and more. These advances have allowed AI to progress at unprecedented rates and have set the stage for a wide range of applications previously in (or beyond) the domain of conventional software engineering to shift to AI-based approaches.

For us as hardware engineers, however, moving forward from this point requires a change in our primary professional relationships. Artificial neural networks (ANNs) require specialized hardware that is outside the scope of conventional von Neumann-based computing. And different applications require different hardware. Some particularly powerful techniques, such as the long short-term memory (LSTM) cells used in recurrent neural networks (RNNs), practically demand specialized hardware compute engines – possibly on a per-application basis.
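
To make that compute pattern concrete, here is a minimal NumPy sketch of a single LSTM cell step, following the standard textbook formulation rather than any particular vendor’s engine; the weight shapes, names, and toy dimensions are illustrative assumptions. Each step’s work is dominated by dense multiply-accumulate operations, and each step consumes the previous step’s hidden state, so the recurrence is inherently serial – exactly the combination that makes purpose-built compute engines attractive.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    # Two dense matrix-vector products dominate each timestep's compute:
    # the multiply-accumulate workload a dedicated engine is built to handle.
    gates = W @ x + U @ h_prev + b                      # stacked i, f, o, g pre-activations
    i, f, o, g = np.split(gates, 4)
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)   # new cell state
    h = sigmoid(o) * np.tanh(c)                         # new hidden state
    return h, c

# Toy dimensions for illustration only; real models are far larger.
n_in, n_hid = 8, 16
rng = np.random.default_rng(0)
W = rng.standard_normal((4 * n_hid, n_in))
U = rng.standard_normal((4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.standard_normal((20, n_in)):   # a 20-step input sequence
    h, c = lstm_step(x, h, c, W, U, b)      # each step depends on the last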

Simply put, the hardware needs of AI and the hardware needs of traditional software development are diverging in a big way. And, while traditional software development will likely require only evolutionary improvement of conventional computing hardware, AI-specific hardware is a different game entirely, and it will be in a massive state of flux over the coming decades. While “software” will continue to ask us for faster, smaller, cheaper, more efficient versions of the same basic architectures, AI will require revolutionary new types of logic unlike anything we have designed before.

Data science (and AI) as a field has advanced well ahead of our ability to put its techniques to practical use. The complexities of choosing, applying, and tuning the AI techniques for any particular application will be far beyond the average hardware designer’s experience and understanding of AI, and the complex hardware architectures required to achieve optimal performance on those applications will be far outside the expertise of most data scientists. In order to advance AI at its maximum rate and to realize its potential, we as hardware engineers must form deep collaborative relationships with data scientists and AI experts, similar to those we have had with software engineers for the past five decades.

This does not imply that traditional software development has no role in the advancement of AI. In fact, most current implementations of AI and neural network techniques are built in traditional software. The adaptation to custom hardware generally comes far later, after the techniques have been proven in software. But, in order for most AI applications to reach practical usability, custom hardware is required.

The immense amount of computation required for ANN implementation, and particularly for deep neural network (DNN) implementation in real-world applications, is normally considered a cloud/data center problem. Today, cloud service suppliers are bolstering their offerings with GPU and FPGA acceleration to assist with both the training and inference tasks in DNNs. But for many of the most interesting applications (such as autonomous driving), the latency and connectivity requirements make cloud-based solutions impractical. In these situations, DNN inference, at a minimum, must be done locally, with what are essentially embedded AI engines. Apple, for example, chose to run iPhone X face recognition on an on-board AI engine rather than send sensitive identification data over the network, using an embedded AI processor to keep the information local.
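
As a rough illustration of that embedded workload, here is a minimal inference-only sketch in NumPy for a tiny fully-connected network; the layer sizes and random weights are placeholders, not any real model or any particular vendor’s engine. The point is that the deployed forward pass is a fixed sequence of matrix-vector products and simple nonlinearities, with no training machinery and no cloud dependency, which is what lets it run locally, within tight latency bounds, on a small dedicated engine.

import numpy as np

rng = np.random.default_rng(0)

# In a deployed device, these "trained" parameters would be loaded from
# non-volatile storage; random placeholder values stand in for them here.
W1, b1 = rng.standard_normal((64, 128)), np.zeros(64)
W2, b2 = rng.standard_normal((10, 64)), np.zeros(10)

def infer(x):
    # Forward pass only: no gradients, no weight updates, no cloud round trip.
    h = np.maximum(0.0, W1 @ x + b1)   # ReLU hidden layer
    return W2 @ h + b2                 # output scores (logits)

x = rng.standard_normal(128)           # e.g., a pre-processed sensor frame
print(infer(x).argmax())               # most likely class, computed on-device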

Over the past several years, much of digital hardware design has devolved into an integration and verification task. Powerful libraries of IP, rich sets of reference designs, and well-defined standardized interfaces have reduced the amount of detailed logic design required of the typical hardware engineer to almost nil. Our days are more often spent plugging together large pre-engineered blocks, then debugging interface issues and tweaking system-wide concerns such as power, performance, form factor, and cost.

But the coming AI revolution is likely to change all that. In order to keep up with rapidly evolving techniques for native hardware AI implementations, we’ll be required to break away from our dependence on off-the-shelf, pre-engineered hardware and re-immerse ourselves in the art of detailed design. For FPGA and ASIC designers, a whole new land of opportunity emerges beyond the current “design yet another slightly specialized SoC” mantra. It will be an exciting and trying time. We’ll change the world, yet again.
