Watching the Weightless

Integration Transcends Moore’s Law

We engineers like things we can count and measure. Our professional comfort zone is in the metrics, multipliers, and deltas – hard data on which we can rest our reassurances that we’ve progressed – solved the world’s problems one bit at a time. Moore’s Law, of course, is a solid long-arc barometer for the engineering collective – a sequence of numbers on which we can hang our collective hats – carving notches in the stock as we toil our way toward infinity. And, at some level, it is the basis on which we’ve built our businesses, seeking remuneration for our engineering achievements.

Quietly, though, that framework has shifted. While our hypnotic stare has been affixed to the exponentially growing number of transistors, our currency has changed. The limit, it seems, as our principal metric approaches infinity is a new term – the result of a different function. For decades, the engineers at the foundation of the technology pyramid created components – integrated arrays of transistors that combined in new and interesting ways to provide higher-level functions, which in turn were arrayed to produce boards, devices, and (when overlaid with software) systems. These systems solved end-user problems.

As IoT has taken center stage, however, a different tide is moving us forward. No longer is the tally of transistors a valid measure of the technological milestone we’ve achieved. Integrated stacks of silicon and software embody a collective experience that has snowballed through the decades in a sum-is-exponentially-greater-than-the-parts equation that defies any attempt at measurement.

Whoa! What does that even mean?

A press release crossed my desk recently from Intel, announcing that the company had collaborated with NEC in the creation of NEC’s NeoFace facial recognition engine. The release explains that NeoFace, on the hardware side, combines FPGA acceleration with conventional processing elements to deliver impressive performance in facial recognition benchmarks conducted by the U.S. National Institute of Standards and Technology (NIST). The release says:

“The NIST tests evaluated the accuracy of the technology in two real-life test scenarios including a test for entry-exit management at an airport passenger gate. It determined whether and how well the engine could recognize people as they walked through an area one at a time without stopping or looking at the camera. NEC’s face recognition technology won first place with a matching accuracy of 99.2 percent. The error rate of 0.8 percent is less than one-fourth the second-place error rate.”

Obviously, this is a tremendous technological achievement. Identifying 99.2 percent of people in real time from a 4K video stream as they walk through a complex environment, without stopping or pausing, and with diverse lighting and other environmental challenges, is shockingly close to “magic”. The technology behind NeoFace is daunting. It packs extreme computational power, provided by heterogeneous computing, with FPGAs doing the heavy lifting as hardware accelerators for the recognition algorithms. It fuses machine learning, heterogeneous computing, big data, video processing, and a host of other esoteric technologies into a vision solution that achieves what seems to be the impossible. And, even though it might outperform a human in real-time identification of a random collection of people, it is managing only a tiny sliver of what true human vision can accomplish.
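
NEC hasn’t published NeoFace’s internals, but a rough sketch can make the shape of the computation concrete. Most modern face-recognition pipelines reduce each detected face to a fixed-length embedding vector and identify people by nearest-neighbor matching against a gallery of enrolled embeddings; the deep network that produces the embedding is the compute-heavy stage an FPGA accelerator would offload. The Python below is a minimal, hypothetical illustration of the matching step only; the embed function is a stand-in, not NEC’s algorithm:

    import numpy as np

    EMBED_DIM = 512  # a typical embedding size; NeoFace's is not public

    def embed(face_crop: np.ndarray) -> np.ndarray:
        """Stand-in for the accelerated deep-network embedding stage."""
        rng = np.random.default_rng(int(face_crop.sum()) % 2**32)
        v = rng.standard_normal(EMBED_DIM)
        return v / np.linalg.norm(v)  # unit-normalize for cosine matching

    def identify(face_crop, gallery, threshold=0.6):
        """Match one face against enrolled identities by cosine similarity."""
        query = embed(face_crop)
        # On unit vectors, cosine similarity is just a dot product.
        scores = {name: float(query @ ref) for name, ref in gallery.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] >= threshold else None  # None = unknown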

Put another way, NeoFace represents but one narrow slice of the rapidly exploding spectrum of embedded vision. As we work our way up the capability chain, we will undoubtedly see strong trends emerge in the hardware, software, and methodologies used to achieve machine vision, but for now each real-world application will most likely be uniquely focused. The “what’s going on in this scene” test for intelligent vision demands a far greater awareness of context than anything we’ve seen thus far.

But, NeoFace is relevant to our discussion in another way. It’s the shape of solutions to come. It represents an emerging era where the number of transistors or the number of lines of code embodied in a solution conveys little relevant information about how advanced or sophisticated the solution is. NeoFace definitely contains billions of transistors, but so does your average RAM array. NeoFace definitely embodies millions of lines of source code, but so do any number of dull, pedestrian software systems. No metric we have yet defined characterizes the sophistication of solutions that integrate the state of the art in transistor density, compute acceleration, software development, machine learning, big data, and on and on.

As the number of transistors and the number of lines of code approach infinity, we definitely hit a vanishing point in their usefulness as metrics. Something else defines the sophistication of our solution. As applications start to develop and improve themselves as a result of their deployment history, even the number of hours of engineering investment in developing a system becomes almost moot. We lose the ability to measure what we’ve created in the usual ways.
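
What would a system that improves itself through deployment look like? A toy sketch, continuing the hypothetical matcher above: suppose each confirmed hit or miss from the field nudges the decision threshold. After enough traffic, the deployed system’s behavior is a product of its history, not of any countable engineering input:

    class SelfTuningMatcher:
        """Toy illustration: a decision threshold that adapts to field feedback."""

        def __init__(self, threshold: float = 0.6, rate: float = 0.01):
            self.threshold = threshold
            self.rate = rate

        def record_outcome(self, score: float, was_correct: bool) -> None:
            # A confirmed false accept raises the bar;
            # a confirmed false reject lowers it.
            if score >= self.threshold and not was_correct:
                self.threshold += self.rate
            elif score < self.threshold and not was_correct:
                self.threshold -= self.rate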

This loss of metrics is frightening because it affects the nature of what we sell. IoT hasn’t just moved us from selling stand-alone systems to selling distributed, networked systems. It fundamentally changes our value proposition. There really is no meaningful, sustainable way to sell IoT as “devices” or “systems”. The thing we are actually selling is “capability”. With NeoFace, NEC is selling the capability to recognize human faces. Yes, there are Xeon processors and FPGAs and software and boards and modules and memory and data – but none of that has any real meaning outside the context of the capability.

As system integrators, we will move more and more to stitching systems together by acquiring and integrating “capabilities” rather than components. And, capabilities-as-services may well be a dominant business model in the future. But, as we create technology that can perform increasingly sophisticated tasks previously reserved only for humans, and as we begin to endow that technology with super-human capabilities, it will become increasingly difficult to put a fence of ownership around a particular piece of hardware, software, data, or IP. The capability itself will have transcended decomposition, and therefore ownership. While the practice of engineering may not change, the economics and the product of it most certainly will.
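
To make the capabilities-as-services idea concrete, consider what buying recognition as a capability might look like from the integrator’s side. The endpoint, fields, and authorization scheme below are invented for illustration; the point is that nothing about Xeons, FPGAs, or model weights appears anywhere in the contract:

    import json
    from urllib import request

    # Hypothetical capability-as-a-service endpoint (invented for illustration).
    RECOGNIZE_URL = "https://api.example.com/v1/recognize"

    def recognize_face(jpeg_bytes: bytes, api_key: str):
        """Buy the capability: send an image, get back an identity (or None)."""
        req = request.Request(
            RECOGNIZE_URL,
            data=jpeg_bytes,
            headers={
                "Content-Type": "image/jpeg",
                "Authorization": "Bearer " + api_key,
            },
        )
        with request.urlopen(req) as resp:
            result = json.load(resp)
        return result.get("identity")  # the capability's entire surface area

The integrator composes capabilities like this one into larger systems; whose transistors execute the recognition is invisible and, increasingly, beside the point.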
