feature article

“Swimming in Sensors, Drowning in Data”

Mil-Aero Challenges Go Mainstream

As is often the case, the system design challenges faced by the defense industry were harbingers of issues to come for the rest of us. In 2010, Lt. Gen. David A. Deptula, Air Force deputy chief of staff for intelligence, was quoted as saying, “We’re going to find ourselves, in the not too distant future, swimming in sensors and drowning in data.” That was roughly a decade after Kevin Ashton, co-founder and executive director of the Auto-ID Center, reportedly coined the term “Internet of Things.” Many of those “things,” it turns out, are sensors of various types, pumping out massive amounts of raw data – from which we can hopefully learn… something.

The past few years have seen a nested exponential explosion of sensor data. The number of active sensors in the world has been forecast to reach one trillion within the next decade (yes, we realize that’s well over a hundred for every living human on Earth), and, during that same period, the amount of data dumped out by each sensor is trending sharply upward. The result is a tsunami of data that our systems and software are ill-prepared to handle – something that we engineers should think of as “job security.”

The trick is, of course, turning all that data into useful, actionable information. 

This data deluge has forced us to rethink every aspect of our computing architecture. Divining information from data means carefully and thoughtfully distributing the computing load from the edge of the network, right where the sensors are spitting out the data, through the vaguely defined “fog” and “mist” layers in between, up to the heavy-iron “cloud” resources in expansive data centers – and then passing the resulting information back down the chain to the level at which it can do some good.
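To make that round trip concrete, here’s a minimal sketch in Python. The function names – summarize_at_edge, decide_in_cloud, act_at_edge – are entirely hypothetical stand-ins for real services: the edge reduces raw samples to a summary, the cloud turns summaries into a decision, and the decision flows back down to where it can act.

```python
# Minimal sketch of the edge -> cloud -> edge round trip described above.
# All names and values here are illustrative assumptions, not a real framework.

from statistics import mean

def summarize_at_edge(raw_samples):
    """Reduce a burst of raw sensor readings to a compact summary."""
    return {"count": len(raw_samples),
            "mean": mean(raw_samples),
            "peak": max(raw_samples)}

def decide_in_cloud(summary, alarm_threshold=80.0):
    """Heavier, aggregate analysis happens upstream; only a decision comes back."""
    return {"raise_alarm": summary["peak"] > alarm_threshold}

def act_at_edge(decision):
    """The result is applied back at the edge, where it can do some good."""
    print("alarm raised" if decision["raise_alarm"] else "all quiet")

raw = [42.0, 37.5, 91.2, 40.1]   # pretend burst of sensor readings
act_at_edge(decide_in_cloud(summarize_at_edge(raw)))
```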

The intelligent distribution of the computation load is critical. The farther out toward the edge we can push data-reducing computations, the more we lighten the load on the upstream parts of the system. A security camera that can send the message, “two people are currently trying to break into the south entrance,” is far more efficient than one that sends hundreds of hours of HD video upstream, depending on human or computer resources somewhere else to extract the useful insight in a timely manner. By building more intelligence into the edge of the network, we reduce the loads on our data pipes and our servers and lower our latency.
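As a back-of-the-envelope illustration – not any particular product or protocol – the entire upstream traffic from such a camera could be a few bytes of JSON per event rather than hours of HD video. In the sketch below, detect_people() and publish() are hypothetical stand-ins for an on-camera model and an upstream transport such as MQTT.

```python
# Sketch of an edge camera that ships event messages instead of raw video.
# detect_people() and publish() are hypothetical placeholders.

import json
import time

def detect_people(frame):
    """Stand-in for an on-camera detector; a real one would run inference here."""
    return frame.get("people", 0)

def publish(topic, payload):
    """Stand-in for the upstream transport (e.g., an MQTT publish)."""
    print(topic, payload)

def monitor(frames, location="south entrance"):
    """Watch frames locally and send only compact event messages upstream."""
    for frame in frames:
        count = detect_people(frame)
        if count > 0:                      # nothing leaves the device otherwise
            publish("site/security/events", json.dumps({
                "ts": time.time(),
                "location": location,
                "people": count,
            }))

monitor([{"people": 0}, {"people": 2}, {"people": 0}])
```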

Much of the most valuable information, however, requires the aggregation of data from multiple sources. This pushes some of the most useful and critical computations one step back from the edge, to a point where multi-sensor data can be cross-correlated to extract context. Rather than raw feeds from separate accelerometers, gyroscopes, and magnetometers, we prefer a simple description of the motion of the instrumented object. Rather than separate video feeds from multiple cameras, we’d rather see a 3D map of the visual space. Instead of lidar, video, inertial, and other massive streams of bits, we’d prefer our autonomous car to simply cruise along without striking anything.
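A classic example of that kind of cross-correlation is a complementary filter, which blends a gyroscope’s short-term accuracy with an accelerometer’s long-term stability into a single orientation estimate. The sketch below is a toy version; the sample values and the 0.98 blend factor are illustrative assumptions, not tuned numbers.

```python
# Toy complementary filter: two raw sensor streams in, one motion estimate out.

import math

def fuse_pitch(samples, dt=0.01, alpha=0.98):
    """samples: (gyro_rate_deg_per_s, accel_x_g, accel_z_g) tuples, one per tick."""
    pitch = 0.0
    for gyro_rate, ax, az in samples:
        accel_pitch = math.degrees(math.atan2(ax, az))  # gravity-based estimate
        # Trust the gyro over short intervals, the accelerometer over the long run.
        pitch = alpha * (pitch + gyro_rate * dt) + (1 - alpha) * accel_pitch
    return pitch

stream = [(1.5, 0.02, 0.99), (1.4, 0.03, 0.99), (1.6, 0.04, 0.98)]
print(f"estimated pitch: {fuse_pitch(stream):.2f} degrees")
```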

The complex challenges presented by this distribution of computing duties also demand a break from the traditional von Neumann computing architecture. While some parts of every system lend themselves to conventional processor architectures, many problems are better served by FPGAs, GPUs, or other novel architectures that parallelize and distribute the computing load, accelerating the algorithms while reducing the total power required. Edge computation is typically heavily power-constrained – often limited by battery life, harvested energy, or other modest power sources. So, while mobile quad-core 64-bit ARM-based application processors are technically feasible, the available energy often limits us to more efficient alternative processing architectures right-sized for the task at hand.

In addition to the computational complexity, bandwidth, power, and latency challenges, our new distributed heterogeneous computing systems must also pack unprecedented levels of security, reliability, and robustness. The enormous quantity of important information flowing through the public airwaves presents a vast green field for bad actors to test their craft, and the complexity of these distributed architectures puts a strain on our tried-and-true methods for securing our systems.

The most exciting benefits from this new machine, however, are likely to come from the new powers of observation these systems may bestow upon us. In medicine, for example, we are almost certain to find new correlations between observable data and the onset of dangerous conditions. Monitoring millions of patients and correlating the collected data with diagnoses and outcomes, we should begin to learn new “early warning” signs that could save lives and make treatments less costly and more effective. In just about every industry, one could come up with scenarios where insightful analysis of sensor data could deliver not just new information, but new knowledge about how things work and interrelate.

Finding these patterns in data, of course, is the domain of “big data” analysis and rapidly emerging AI and neural network technology. Rather than locking our systems into canonical algorithms, we can give them the power to intelligently observe and adapt in ways that human programmers could not foresee. Here again, though, conventional computing hardware is giving way to alternative architectures such as GPUs and FPGAs for doing training and inference efficiently. 
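Neural networks are beyond the scope of a quick sketch, but even a toy example shows the shift from canned rules to learned behavior: instead of a hard-coded alarm threshold, the detector below derives its notion of “normal” from the data it has actually observed. The numbers are made up, and this is a far simpler technique than the AI approaches mentioned above – it only illustrates the observe-and-adapt idea.

```python
# Toy "observe and adapt" detector: the normal band is learned from the
# warm-up portion of the stream rather than hard-coded. Data is synthetic.

import random
from statistics import mean, stdev

def find_anomalies(readings, warmup=20, sigmas=3.0):
    """Learn a 'normal' band from the warm-up samples, then flag outliers."""
    mu, sigma = mean(readings[:warmup]), stdev(readings[:warmup])
    return [(i, x) for i, x in enumerate(readings[warmup:], start=warmup)
            if abs(x - mu) > sigmas * sigma]

random.seed(0)
data = [random.gauss(70.0, 1.5) for _ in range(100)]   # made-up sensor stream
data[60] = 95.0                                        # injected anomaly
print(find_anomalies(data))
```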

IoT presents perhaps the largest cross-domain engineering trend we have seen in decades. It demands collaboration and innovation across multiple disciplines – semiconductor processes, hardware architectures, networking, communications, software, mechanical and MEMS, optical, design automation – and multiple verticals: data center, mobile, consumer, industrial, medical, military, and on and on. Just about every press release, product announcement, and PowerPoint deck we have seen over the past year has had some explanation of how the company, product, or technology helps to enable IoT.

This rising tide of sensor data will most definitely overwhelm us, but the faster we can learn to tread water and build systems that can extract useful information from these zettabytes of zeros and ones, the sooner we’ll be able to surf the immense power of our overwrought aquatic IoT metaphor. Uh, meaning, this IoT stuff is about to get interesting.

 

 
