
Hug a Data Scientist

Time for a Sea Change in Computing

It’s been a long and mutually productive relationship, but it’s time to break up.

For the five-decades-plus run of Moore’s Law, we electronics engineers have been married to software engineers. It’s been a love/hate relationship for sure, but together the two communities have achieved the single greatest accomplishment in the history of technology: the evolution of the modern computing infrastructure. Over those five decades of collaboration, we have seen orders of magnitude of progress in every aspect of the global computing machine. On the hardware side, that progress spans parameters like processor performance, energy efficiency, size, cost, memory density, reliability, storage capacity, network bandwidth, reach, and latency.

On the software side, we’ve seen the evolution of languages, compilers, operating systems, development tools, and productivity, coupled with an immense repository of powerful algorithms that can efficiently perform practically any calculation under the sun. Our understanding of the nature of information, the solvability of problems, and the limits of human capacity to handle complexity has made quantum leaps in this golden age of software engineering.

These advancements in hardware and software engineering have gone hand-in-hand, with bi-directional feedback loops where software engineers have set the demands for the next generation of hardware, and hardware engineers have enabled new methods and advancements in software. Software engineering was always the customer of hardware engineering, pushing and directing the evolution of hardware to serve its needs.

Now, however, we have come to a fork in the road, and this partnership that has flourished for decades has to make way for an interloper. Since about 2012, fueled by advances in computing hardware, progress in artificial intelligence (AI) has exploded. Some experts claim that more headway has been made in the past three years than in the entire history of AI prior to that. AI is now being adopted across a huge swath of applications such as machine vision, autonomous driving, speech and language interpretation, and almost countless other areas. 

This dramatic uptick in AI adoption has surfed atop significant advances in hardware and software technology, including heterogeneous computing with GPUs and FPGAs, networking, cloud-based computing, big data, sensor technology, and more. These advances have allowed AI to progress at unprecedented rates and have set the stage for a wide range of applications previously in (or beyond) the domain of conventional software engineering to shift to AI-based approaches.

For us as hardware engineers, however, moving forward from this point requires a change in our primary professional relationships. Artificial neural networks (ANNs) require specialized hardware that is outside the scope of conventional von Neumann-based computing. And different applications require different hardware. Some particularly powerful techniques, such as the long short-term memory (LSTM) cells used in recurrent neural networks (RNNs), practically demand specialized hardware compute engines, possibly on a per-application basis.
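
To see why, consider what a single LSTM time step involves: several dense matrix-vector products plus elementwise gating, repeated for every element of a sequence, with each step depending on the previous one. The sketch below is purely illustrative (NumPy, with hypothetical weight shapes), not any particular accelerator’s implementation, but it shows the compute pattern that pushes designers toward specialized engines.

```python
# A minimal, illustrative sketch of one LSTM time step (assumed shapes).
# The dense multiplies and the step-to-step dependence on h_prev/c_prev
# are what make generic von Neumann hardware a poor fit at scale.
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. x: (X,), h_prev/c_prev: (H,), W: (4H, X), U: (4H, H), b: (4H,)."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b             # all four gate pre-activations at once
    i = 1.0 / (1.0 + np.exp(-z[0:H]))      # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))    # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))  # output gate
    g = np.tanh(z[3*H:4*H])                # candidate cell state
    c = f * c_prev + i * g                 # new cell state
    h = o * np.tanh(c)                     # new hidden state
    return h, c
```

Because h and c from one step feed the next, the sequence cannot simply be parallelized away; an engine that keeps the weights and state close to the multipliers wins on both latency and energy.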

Simply put, the hardware needs of AI and the hardware needs of traditional software development are diverging in a big way. And, while traditional software development will likely require only evolutionary improvements to current conventional computing hardware, AI-specific hardware is a different game entirely, and it will be in a massive state of flux over the coming decades. While “software” will continue to ask us for faster, smaller, cheaper, more efficient versions of the same basic architectures, AI will require revolutionary new types of logic that have never been designed before.

Data science (and AI) as a field has advanced considerably ahead of our ability to put its techniques to practical use. The complexities of choosing, applying, and tuning the AI techniques for any particular application will be far beyond the average hardware designer’s experience with and understanding of AI, while the complex hardware architectures required to achieve optimal performance on those applications will be far outside the expertise of most data scientists. In order to advance AI at its maximum rate and to realize its potential, we as hardware engineers must form deep collaborative relationships with data scientists and AI experts, similar to those we have had with software engineers for the past five decades.

This does not imply that traditional software development has no role in the advancement of AI. In fact, most current implementations of AI and neural network techniques are built in traditional software, and the move to custom hardware generally comes far later, after the techniques have been proven in software. But, in order for most AI applications to reach practical usability, custom hardware is required.

The immense amount of computation required for ANN implementation, and particularly for deep neural network (DNN) implementation in real-world applications, is normally considered a cloud/data-center problem. Today, cloud service providers are bolstering their offerings with GPU and FPGA acceleration to assist with both the training and inferencing tasks in DNNs. But, for many of the most interesting applications (autonomous driving, for example), latency and connectivity requirements make cloud-based solutions impractical. In these situations, the DNN (or at least its inferencing portion) must run locally, on what are essentially embedded AI engines. Apple, for example, chose an on-board AI engine for iPhone X face recognition, keeping sensitive identification data on the device rather than sending it over the network.
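
To make the latency point concrete, here is a back-of-the-envelope sketch. The round-trip and inference times are assumed, illustrative numbers rather than measurements of any particular system; they simply show how quickly distance accumulates while a vehicle waits for an answer.

```python
# Rough, illustrative arithmetic (assumed numbers): distance a vehicle
# travels while waiting for an inference result from the cloud versus
# from an on-board (embedded) AI engine.
SPEED_MPH = 70.0
CLOUD_ROUND_TRIP_S = 0.100   # assumed network round trip: 100 ms
LOCAL_INFERENCE_S = 0.010    # assumed on-board inference time: 10 ms

speed_m_per_s = SPEED_MPH * 1609.34 / 3600.0   # ~31.3 m/s

print(f"Cloud round trip: {speed_m_per_s * CLOUD_ROUND_TRIP_S:.1f} m traveled")  # ~3.1 m
print(f"Local inference:  {speed_m_per_s * LOCAL_INFERENCE_S:.1f} m traveled")   # ~0.3 m
```

Connectivity is the other half of the argument: a car that loses its network link still has to drive, so the inference engine must be on board no matter how fast the network usually is.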

Over the past several years, much of digital hardware design has devolved into an integration and verification task. Powerful libraries of IP, rich sets of reference designs, and well-defined standardized interfaces have reduced the amount of detailed logic design required of the typical hardware engineer to almost nil. Our days are more often spent plugging together large pre-engineered blocks, then debugging and tweaking interface issues along with system-wide concerns such as power, performance, form factor, and cost.

But the coming AI revolution is likely to change all that. In order to keep up with rapidly evolving techniques for native hardware AI implementations, we’ll be required to break away from our dependence on off-the-shelf, pre-engineered hardware and re-immerse ourselves in the art of detailed design. For FPGA and ASIC designers, a whole new land of opportunity emerges beyond the current “design yet another slightly specialized SoC” mantra. It will be an exciting and trying time. We’ll change the world, yet again.
