
Watching the Weightless

Integration Transcends Moore’s Law

We engineers like things we can count and measure. Our professional comfort zone is in the metrics, multipliers, and deltas – hard data on which we can rest our reassurances that we’ve progressed – solved the world’s problems one bit at a time. Moore’s Law, of course, is a solid long-arc barometer for the engineering collective – a sequence of numbers on which we can hang our collective hats – carving notches in the stock as we trouble our way toward infinity. And, at some level it is the basis on which we’ve built our businesses, seeking remuneration for our engineering achievements.

Quietly, though, that framework has shifted. While our hypnotic stare has been affixed to the exponentially growing number of transistors, our currency has changed. The limit, it seems, as our principal metric approaches infinity is a new term – the result of a different function. For decades, the engineers at the foundation of the technology pyramid created components – integrated arrays of transistors that combined in new and interesting ways to provide higher-level functions, which in turn were arrayed to produce boards, devices, and (when overlaid with software) systems. These systems solved end-user problems.

As IoT has taken center stage, however, a different tide is moving us forward. No longer is the tally of transistors a valid measure of the technological milestone we’ve achieved. Integrated stacks of silicon and software embody a collective experience that has snowballed through the decades in a sum-is-exponentially-greater-than-the-parts equation that defies any attempt at measurement.

Whoa! What does that even mean?

A press release crossed my desk recently from Intel, announcing that the company had collaborated with NEC in the creation of NEC’s NeoFace facial recognition engine. The release explains that NeoFace, on the hardware side, combines FPGA acceleration with conventional processing elements to deliver impressive performance in facial recognition benchmarks by the U.S. National Institute of Standards and Technology (NIST). The release says:

“The NIST tests evaluated the accuracy of the technology in two real-life test scenarios including a test for entry-exit management at an airport passenger gate. It determined whether and how well the engine could recognize people as they walked through an area one at a time without stopping or looking at the camera. NEC’s face recognition technology won first place with a matching accuracy of 99.2 percent. The error rate of 0.8 percent is less than one-fourth the second-place error rate.”
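The arithmetic buried in that last sentence is worth unpacking. A minimal sanity check, using only the figures quoted above, shows what “less than one-fourth the second-place error rate” implies about the runner-up:

```python
# Sanity check of the error-rate comparison quoted from the press release.
first_place_error = 0.8   # percent, NEC's stated error rate
ratio = 4                 # "less than one-fourth" implies at least a 4x gap

# The second-place error rate must therefore have exceeded:
second_place_error_min = first_place_error * ratio        # 3.2 percent
# ...which caps the second-place matching accuracy below:
second_place_accuracy_max = 100 - second_place_error_min  # 96.8 percent

print(second_place_error_min, second_place_accuracy_max)
```

In other words, a gap of less than one percentage point in accuracy (99.2 versus at best 96.8) corresponds to a fourfold gap in errors, which is how these benchmarks are properly compared.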

Obviously, this is a tremendous technological achievement. Identifying 99.2 percent of people in real time from a 4K video stream as they walk through a complex environment without stopping, under diverse lighting and other environmental challenges, is shockingly close to “magic.” The technology behind NeoFace is daunting. It packs extreme computational power provided by heterogeneous computing, with FPGAs doing the heavy lifting as hardware accelerators for the recognition algorithms. It fuses machine learning, heterogeneous computing, big data, video processing, and a host of other esoteric technologies into a vision solution that achieves what seems to be the impossible. And, even though it might outperform a human in real-time identification of a random collection of people, it is managing only a tiny sliver of what true human vision can accomplish.
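To make the recognition step concrete: systems of this kind typically reduce each face to an embedding vector and declare a match when a similarity score clears a threshold. The sketch below is a toy illustration of that general idea, not NEC’s actual algorithm; the four-dimensional vectors and the 0.75 threshold are invented for the example, and real systems use embeddings of hundreds of dimensions produced by a deep network, with the gallery search accelerated in hardware.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(probe, gallery_entry, threshold=0.75):
    """Declare a match when similarity clears the threshold."""
    return cosine_similarity(probe, gallery_entry) >= threshold

# Toy embeddings: two captures of the same person land close together
# in embedding space; a stranger lands far away.
enrolled    = [0.90, 0.10, 0.30, 0.20]
same_person = [0.88, 0.12, 0.28, 0.22]
stranger    = [0.10, 0.90, 0.20, 0.70]

print(is_match(same_person, enrolled))  # similar embeddings match
print(is_match(stranger, enrolled))     # dissimilar ones do not
```

The hard part, of course, is everything this sketch leaves out: detecting and normalizing faces from moving 4K video, producing embeddings robust to pose and lighting, and scanning a large gallery fast enough to keep up with a stream of walking passengers.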

Put another way, NeoFace represents but one narrow slice of the rapidly exploding spectrum of embedded vision. As we work our way up the capability chain, we will undoubtedly see strong trends emerge in the hardware, software and methodologies used to achieve machine vision, but for now each real-world application will most likely be uniquely focused. The “what’s going on in this scene” test for intelligent vision demands a far greater awareness of context than anything we’ve seen thus far.

But, NeoFace is relevant to our discussion in another way. It’s the shape of solutions to come. It represents an emerging era where the number of transistors or the number of lines of code embodied in a solution conveys little relevant information about how advanced or sophisticated the solution is. NeoFace definitely contains billions of transistors, but so does your average RAM array. NeoFace definitely embodies millions of lines of source code, but so do any number of dull, pedestrian software systems. No metric we have yet defined characterizes the sophistication of solutions that integrate the state of the art in transistor density, compute acceleration, software development, machine learning, big data, and on and on.

As the number of transistors and the number of lines approach infinity, we definitely hit a vanishing point in their usefulness as metrics. Something else defines the sophistication of our solution. As applications start to develop and improve themselves as a result of their deployment history, even the number of hours of engineering investment in developing a system becomes almost moot. We lose the ability to measure what we’ve created in the usual ways.

This loss of metrics is frightening because it affects the nature of what we sell. IoT hasn’t just moved us from selling stand-alone systems to selling distributed networked systems. It fundamentally changes our value proposition. There really is no meaningful, sustainable way to sell IoT as “devices” or “systems”. The thing we are actually selling is “capability”. With NeoFace, NEC is selling the capability to recognize human faces. Yes, there are Xeon processors and FPGAs and software and boards and modules and memory and data – but none of that has any real meaning out of the context of the capability.

As system integrators, we will move more and more to stitching systems together by acquiring and integrating “capabilities” rather than components. And, capabilities-as-services may well be a dominant business model in the future. But, as we create technology that can perform increasingly sophisticated tasks previously reserved only for humans, and as we begin to endow that technology with super-human capabilities, it will become increasingly difficult to put a fence of ownership around a particular piece of hardware, software, data, or IP. The capability itself will have transcended decomposition, and therefore ownership. While the practice of engineering may not change, the economics and the product of it most certainly will.

