
Hug a Data Scientist

Time for a Sea Change in Computing

It’s been a long and mutually productive relationship, but it’s time to break up.

For the five-decades-plus run of Moore’s Law, we electronics engineers have been married to software engineers. It’s been a love/hate relationship for sure, but together the two communities have achieved the single greatest accomplishment in the history of technology – the evolution of the modern computing infrastructure. During those five decades of collaboration, we have seen progress in every aspect of the global computing machine. On the hardware side alone, there have been orders of magnitude of improvement in parameters like processor performance, energy efficiency, size, cost, memory density, reliability, storage capacity, network bandwidth, reach, and latency.

On the software side, we’ve seen the evolution of languages, compilers, operating systems, development tools, and productivity, coupled with an immense repository of powerful algorithms that can efficiently perform practically any calculation under the sun. Our understanding of the nature of information, the solvability of problems, and the limitations of human capacity to handle complexity has made quantum leaps in this golden age of software engineering.

These advancements in hardware and software engineering have gone hand-in-hand, with bi-directional feedback loops where software engineers have set the demands for the next generation of hardware, and hardware engineers have enabled new methods and advancements in software. Software engineering was always the customer of hardware engineering, pushing and directing the evolution of hardware to serve its needs.

Now, however, we have come to a fork in the road, and this partnership that has flourished for decades has to make way for an interloper. Since about 2012, fueled by advances in computing hardware, progress in artificial intelligence (AI) has exploded. Some experts claim that more headway has been made in the past three years than in the entire history of AI prior to that. AI is now being adopted across a huge swath of applications such as machine vision, autonomous driving, speech and language interpretation, and almost countless other areas. 

This dramatic uptake in AI adoption has surfed atop significant advances in hardware and software technology, including heterogeneous computing with GPUs and FPGAs, networking, cloud-based computing, big data, sensor technology, and more. These advances have allowed AI to advance at unprecedented rates and have set the stage for a wide range of applications previously in (or beyond) the domain of conventional software engineering to shift to AI-based approaches.

For us as hardware engineers, however, moving forward from this point requires a change in our primary professional relationships. Artificial neural networks (ANNs) require specialized hardware that lies outside the scope of conventional von Neumann-based computing. And different applications require different hardware. Some particularly powerful techniques, such as the long short-term memory (LSTM) cells used in recurrent neural networks (RNNs), practically demand specialized compute engines – possibly on a per-application basis.
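To see why an LSTM begs for dedicated hardware, it helps to look at what a single time step actually computes. The sketch below is a minimal NumPy rendering of one LSTM step – the variable names, gate ordering, and weight layout are illustrative assumptions, not any particular library’s API. The point is the compute pattern: two dense matrix-vector products per step, followed by cheap elementwise operations, repeated serially across every element of the input sequence.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_cell(x, h_prev, c_prev, W, U, b):
    """One LSTM time step with hidden size n and input size m.

    W has shape (4n, m), U has shape (4n, n): the four gates are
    computed together as two dense matrix-vector products -- the
    MAC-dominated workload a specialized engine would accelerate.
    """
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b       # all four gate pre-activations
    i = sigmoid(z[0:n])              # input gate
    f = sigmoid(z[n:2 * n])          # forget gate
    o = sigmoid(z[2 * n:3 * n])      # output gate
    g = np.tanh(z[3 * n:4 * n])      # candidate cell update
    c = f * c_prev + i * g           # new cell state
    h = o * np.tanh(c)               # new hidden state
    return h, c
```

Because each step consumes the previous step’s hidden state, the time dimension cannot simply be parallelized away – which is exactly why fixed-function or FPGA implementations of this recurrence can beat general-purpose processors on latency and energy.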

Simply put, the hardware needs of AI and the hardware needs of traditional software development are diverging in a big way. And, while traditional software development will likely require only an evolutionary improvement of current conventional computing hardware, AI-specific hardware is a different game entirely, and it will be in a massive state of flux over the coming decades. While “software” will continue to ask us for faster, smaller, cheaper, more efficient versions of the same basic architectures, AI will require revolutionary new types of logic that have not been designed before.

Data science (and AI) as a field has advanced considerably ahead of our ability to practically utilize its techniques. The complexities of choosing, applying, and tuning the AI techniques for any particular application will be far beyond the average hardware designer’s expertise, and the complex hardware architectures required to achieve optimal performance on those applications will be far outside the expertise of most data scientists. In order to advance AI at its maximum rate and to realize its potential, we as hardware engineers must form deep collaborative relationships with data scientists and AI experts, similar to those we have had with software engineers for the past five decades.

This does not imply that traditional software development has no role in the advancement of AI. In fact, most current implementations of AI and neural network techniques are based in traditional software. The adaptation to custom hardware generally comes far later, after the techniques have been proven in software. But, in order for most AI applications to hit the realm of usability, custom hardware is required.

The immense amount of computation required for ANN implementation – and particularly for deep neural network (DNN) implementation in real-world applications – is normally considered a cloud/data-center problem. Today, cloud service suppliers are bolstering their offerings with GPU and FPGA acceleration to assist with both the training and inference tasks in DNNs. But for many of the most interesting applications (autonomous driving, for example), the latency and connectivity requirements make cloud-based solutions impractical. In those situations, the DNN – or at least its inference stage – must run locally, on what are essentially embedded AI engines. Apple, for example, performs iPhone X face recognition on an on-board AI engine rather than sending sensitive identification data over the network, keeping the information local.
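The scale of that computation is easy to underestimate. The sketch below is a deliberately simple feed-forward inference pass – the layer sizes in the comment are hypothetical, chosen only to show how quickly multiply-accumulate (MAC) counts grow. Even this toy network shape implies millions of MACs per single inference, which is the arithmetic an embedded AI engine must deliver within a tight latency and power budget.

```python
import numpy as np

def dense_relu_network(x, layers):
    """Inference pass of a feed-forward network.

    layers is a list of (W, b) pairs. Each hidden layer is one
    matrix-vector multiply plus bias, then ReLU; the arithmetic is
    dominated by the multiply-accumulates inside W @ x.
    """
    for W, b in layers[:-1]:
        x = np.maximum(W @ x + b, 0.0)   # ReLU is cheap; the MACs are not
    W, b = layers[-1]
    return W @ x + b                     # linear output layer

# Hypothetical sizing: a 784-input net with two 1024-wide hidden
# layers and 10 outputs performs roughly
#   2 * (784*1024 + 1024*1024 + 1024*10) ≈ 3.7 million MACs
# for every single inference -- before considering convolutions,
# recurrence, or video-rate input.
```

Multiply that per-inference cost by a camera’s frame rate and it becomes clear why a general-purpose embedded CPU is rarely enough, and why dedicated inference engines are appearing in edge silicon.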

Over the past several years, much of digital hardware design has devolved into an integration and verification task. Powerful libraries of IP, rich sets of reference designs, and well-defined standardized interfaces have reduced the amount of detailed logic design required of the typical hardware engineer to almost nil. Our days are more often spent plugging together large pre-engineered blocks and then debugging and tweaking issues with interfaces as well as system-wide concerns such as power, performance, form factor, and cost.

But the coming AI revolution is likely to change all that. In order to keep up with rapidly evolving techniques for native hardware AI implementations, we’ll be required to break away from our dependence on off-the-shelf, pre-engineered hardware and re-immerse ourselves in the art of detailed design. For FPGA and ASIC designers, a whole new land of opportunity emerges beyond the current “design yet another slightly-specialized SoC” mantra. It will be an exciting and trying time. We’ll change the world, yet again.
