
Watching the Weightless

Integration Transcends Moore’s Law

We engineers like things we can count and measure. Our professional comfort zone is in the metrics, multipliers, and deltas – hard data on which we can rest assured that we've progressed – solved the world's problems one bit at a time. Moore's Law, of course, is a solid long-arc barometer for the engineering collective – a sequence of numbers on which we can hang our hats – carving notches in the stock as we trouble our way toward infinity. And, at some level, it is the basis on which we've built our businesses, seeking remuneration for our engineering achievements.

Quietly, though, that framework has shifted. While our hypnotic stare has been affixed to the exponentially growing number of transistors, our currency has changed. The limit, it seems, as our principal metric approaches infinity is a new term – the result of a different function. For decades, the engineers at the foundation of the technology pyramid created components – integrated arrays of transistors that combined in new and interesting ways to provide higher-level functions, which in turn were arrayed to produce boards, devices, and (when overlaid with software) systems. These systems solved end-user problems.

As IoT has taken center stage, however, a different tide is moving us forward. No longer is the tally of transistors a valid measure of the technological milestone we've achieved. Integrated stacks of silicon and software embody a collective experience that has snowballed through the decades in a sum-is-exponentially-greater-than-the-parts equation that defies any attempt at measurement.

Whoa! What does that even mean?

A press release from Intel crossed my desk recently, announcing that the company had collaborated with NEC in the creation of NEC's NeoFace facial recognition engine. The release explains that, on the hardware side, NeoFace combines FPGA acceleration with conventional processing elements to deliver impressive performance in facial recognition benchmarks conducted by the U.S. National Institute of Standards and Technology (NIST). The release says:

“The NIST tests evaluated the accuracy of the technology in two real-life test scenarios including a test for entry-exit management at an airport passenger gate. It determined whether and how well the engine could recognize people as they walked through an area one at a time without stopping or looking at the camera. NEC’s face recognition technology won first place with a matching accuracy of 99.2 percent. The error rate of 0.8 percent is less than one-fourth the second-place error rate.”
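The quoted figures can be cross-checked with simple arithmetic. A minimal sketch follows – note that the release states only the ratio, so the second-place error rate appears here as a derived lower bound, not a reported number:

```python
# Back-of-the-envelope check of the NIST figures quoted above.
# Only the 99.2 percent accuracy and the "one-fourth" ratio come from
# the press release; the second-place error rate itself is not given,
# so we can derive only a lower bound for it.

first_place_accuracy = 0.992                            # NEC NeoFace
first_place_error = 1.0 - first_place_accuracy          # 0.8 percent

# "less than one-fourth the second-place error rate" implies the
# runner-up's error rate exceeds four times NEC's:
second_place_error_lower_bound = 4 * first_place_error  # more than 3.2 percent
```

Put in human terms, that's roughly one miss in every 125 people for NeoFace, versus more than one in every 32 for the runner-up.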

Obviously, this is a tremendous technological achievement. Identifying 99.2 percent of people in real time from a 4K video stream – as they walk through a complex environment without stopping or pausing, and with diverse lighting and other environmental challenges – is shockingly close to "magic". The technology behind NeoFace is daunting. It packs extreme computational power provided by heterogeneous computing, with FPGAs doing the heavy lifting as hardware accelerators for the recognition algorithms. It fuses machine learning, heterogeneous computing, big data, video processing, and a host of other esoteric technologies into a vision solution that achieves what seems to be the impossible. And, even though it might outperform a human in real-time identification of a random collection of people, it is managing only a tiny sliver of what true human vision can accomplish.
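NEC has not published NeoFace's internals, but the heterogeneous-computing pattern described above – a host CPU orchestrating the pipeline while an accelerator runs the compute-heavy matching kernel – can be sketched in miniature. Everything below is illustrative: the function names, the dot-product matcher standing in for the FPGA kernel, and the confidence threshold are all assumptions, not NEC's design.

```python
# Illustrative sketch of a heterogeneous recognition pipeline:
# host code orchestrates; the hot matching kernel is a candidate
# for offload to an accelerator (an FPGA, in NeoFace's case).

from dataclasses import dataclass

@dataclass
class Frame:
    frame_id: int
    faces: list  # feature vectors already extracted from the video stream

def accelerated_match(face_vector, gallery):
    """Stand-in for the accelerated matcher: nearest neighbor by
    dot-product similarity against a gallery of enrolled templates."""
    best_id, best_score = None, float("-inf")
    for person_id, template in gallery.items():
        score = sum(a * b for a, b in zip(face_vector, template))
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id, best_score

def process_stream(frames, gallery, threshold=0.5):
    """Host-side orchestration: walk the stream, offload each match,
    and keep only identifications that clear the confidence threshold."""
    results = []
    for frame in frames:
        for face in frame.faces:
            person, score = accelerated_match(face, gallery)
            if score >= threshold:
                results.append((frame.frame_id, person))
    return results
```

The structural point is that the host code never needs to know whether `accelerated_match` ran on a CPU, a GPU, or an FPGA – the accelerator is hidden behind an ordinary function boundary, which is what lets heterogeneous systems swap compute elements underneath a stable software stack.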

Put another way, NeoFace represents but one narrow slice of the rapidly exploding spectrum of embedded vision. As we work our way up the capability chain, we will undoubtedly see strong trends emerge in the hardware, software, and methodologies used to achieve machine vision, but for now each real-world application will most likely be uniquely focused. The "what's going on in this scene" test for intelligent vision demands a far greater awareness of context than anything we've seen thus far.

But, NeoFace is relevant to our discussion in another way. It's the shape of solutions to come. It represents an emerging era where the number of transistors or the number of lines of code embodied in a solution conveys little relevant information about how advanced or sophisticated the solution is. NeoFace definitely contains billions of transistors, but so does your average RAM array. NeoFace definitely embodies millions of lines of source code, but so do any number of dull, pedestrian software systems. No metric we have yet defined characterizes the sophistication of solutions that integrate the state of the art in transistor density, compute acceleration, software development, machine learning, big data, and on and on.

As the number of transistors and the number of lines of code approach infinity, we definitely hit a vanishing point in their usefulness as metrics. Something else defines the sophistication of our solution. As applications start to develop and improve themselves as a result of their deployment history, even the number of hours of engineering investment in developing a system becomes almost moot. We lose the ability to measure what we've created in the usual ways.

This loss of metrics is frightening because it affects the nature of what we sell. IoT hasn't just moved us from selling stand-alone systems to selling distributed networked systems. It fundamentally changes our value proposition. There really is no meaningful, sustainable way to sell IoT as "devices" or "systems". The thing we are actually selling is "capability". With NeoFace, NEC is selling the capability to recognize human faces. Yes, there are Xeon processors and FPGAs and software and boards and modules and memory and data – but none of that has any real meaning outside the context of the capability.

As system integrators, we will move more and more to stitching systems together by acquiring and integrating “capabilities” rather than components. And, capabilities-as-services may well be a dominant business model in the future. But, as we create technology that can perform increasingly sophisticated tasks previously reserved only for humans, and as we begin to endow that technology with super-human capabilities, it will become increasingly difficult to put a fence of ownership around a particular piece of hardware, software, data, or IP. The capability itself will have transcended decomposition, and therefore ownership. While the practice of engineering may not change, the economics and the product of it most certainly will.
