
“Swimming in Sensors, Drowning in Data”

Mil-Aero Challenges Go Mainstream

As is often the case, the system design challenges faced by the defense industry were harbingers of issues to come for the rest of us. In 2010, Lt. Gen. David A. Deptula, Air Force deputy chief of staff for intelligence, was quoted as saying, “We’re going to find ourselves, in the not too distant future, swimming in sensors and drowning in data.” That was more than a decade after Kevin Ashton, executive director of the Auto-ID Center, reportedly coined the term “Internet of Things.” Many of those “things,” it turns out, are sensors of various types, pumping out massive amounts of raw data – from which we can hopefully learn… something.

The past few years have seen a nested, exponential explosion of sensor data. The number of active sensors in the world has been forecast to go as high as one trillion within the next decade (yes, we realize that’s well over a hundred for every living human on Earth), and, during that same time period, the amount of data dumped out by each sensor is trending sharply upwards. This is producing a tsunami of data that our systems and software are ill-prepared to handle – something that we engineers should think of as “job security.”

The trick is, of course, turning all that data into useful, actionable information. 

This data deluge has caused us to rethink every aspect of our computing architecture. The process of divining information from data involves carefully and thoughtfully distributing the computing load from the edge of the network, right where the sensors are spitting out the data, through the vaguely defined “fog” and “mist” layers, and up to the heavy-iron “cloud” resources in expansive data centers. The resulting information then has to be passed back down the chain to the level at which it can do some good.

The intelligent distribution of the computation load is critical. The farther out toward the edge we can push data-reducing computations, the more we lighten the load on the upstream parts of the system. A security camera that can send the message, “two people are currently trying to break into the south entrance,” is far more efficient than one that sends hundreds of hours of HD video upstream, depending on human or computer resources somewhere else to extract the useful insight in a timely manner. By building more intelligence into the edge of the network, we reduce the loads on our data pipes and our servers and lower our latency.
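
To put rough numbers on that trade-off, here is a minimal back-of-the-envelope sketch in Python. The bit rate, event size, and alarm rate are illustrative assumptions rather than measurements, but they show how lopsided the comparison is between streaming raw HD video and sending short, edge-generated event messages.

```python
import json

# Illustrative assumptions (not measured values):
HD_STREAM_MBPS = 5.0          # a typical compressed 1080p stream
SECONDS_PER_DAY = 24 * 60 * 60

# Raw-stream approach: push everything upstream, all day long.
raw_bytes_per_day = HD_STREAM_MBPS * 1_000_000 / 8 * SECONDS_PER_DAY

# Edge-intelligence approach: send a small event only when something happens.
event = {
    "camera": "south-entrance",
    "timestamp": "2017-06-01T02:14:07Z",
    "detection": "2 persons, forced-entry behavior",
    "confidence": 0.91,
}
event_bytes = len(json.dumps(event).encode("utf-8"))
events_per_day = 20                      # assumed alarm rate
edge_bytes_per_day = event_bytes * events_per_day

print(f"Raw video upstream:   {raw_bytes_per_day / 1e9:,.1f} GB/day")
print(f"Edge events upstream: {edge_bytes_per_day / 1e3:,.1f} kB/day")
print(f"Reduction factor:     {raw_bytes_per_day / edge_bytes_per_day:,.0f}x")
```

Even if the assumed event rate is off by an order of magnitude in either direction, the ratio stays enormous, which is the whole argument for pushing the first round of computation out to the sensor itself.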

Much of the most valuable information, however, requires the aggregation of data from multiple sources. This pushes some of the most useful and critical computations one click back from the edge, to a point where multi-sensor data can be cross-correlated to extract context. Rather than raw feeds from separate accelerometers, gyroscopes, and magnetometers, we prefer a simple description of the motion of the instrumented object. Rather than separate video feeds from multiple cameras, we’d rather see a 3D map of the visual space. Instead of lidar, video, inertial, and other massive streams of bits, we’d prefer our autonomous car to simply cruise along without striking anything.
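
As a small illustration of that kind of aggregation, the sketch below implements a complementary filter (one of the simplest sensor-fusion techniques), blending a gyroscope's fast-but-drifting estimate with an accelerometer's noisy-but-absolute one to report a single pitch angle rather than two raw streams. The sample data, time step, and blending coefficient are assumptions chosen only for illustration.

```python
import math

def complementary_filter(samples, dt=0.01, alpha=0.98):
    """Fuse gyro rate (deg/s) and accelerometer g-vector into one pitch angle.

    The gyro integrates smoothly but drifts; the accelerometer is noisy but
    anchored to gravity. Blending the two yields a usable motion description
    without shipping either raw stream upstream.
    """
    pitch = 0.0
    for gyro_rate, (ax, ay, az) in samples:
        accel_pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
        pitch = alpha * (pitch + gyro_rate * dt) + (1 - alpha) * accel_pitch
    return pitch

# Assumed samples: (gyro pitch rate in deg/s, (ax, ay, az) in g)
samples = [(5.0, (-0.09, 0.0, 1.0))] * 200   # two seconds of gentle tilt
print(f"Estimated pitch: {complementary_filter(samples):.1f} degrees")
```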

The complex challenges presented by this distribution of computing duties also demand a break from the traditional von Neumann computing architecture. While some parts of every system lend themselves to conventional processor architectures, many problems are better served by FPGAs, GPUs, or other novel architectures that parallelize and distribute the computing load, accelerating the algorithms while reducing the total power required. Edge computation is typically heavily power-constrained – often running from batteries, harvested energy, or other modest energy sources. So, while mobile quad-core 64-bit ARM-based application processors are technically feasible at the edge, the available energy often limits us to more efficient alternative processing architectures right-sized for the task at hand.
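
A rough energy budget shows why. The figures in the sketch below are assumed, deliberately round numbers rather than datasheet values, but they capture the usual shape of the problem: on a battery-powered node, the radio tends to cost far more energy per byte than local computation costs per operation, so spending a little compute at the edge to avoid transmitting raw data stretches battery life dramatically.

```python
# All figures below are assumptions for illustration, not datasheet values.
BATTERY_WH = 2000 / 1000 * 3.7        # 2,000 mAh cell at 3.7 V ~= 7.4 Wh
BATTERY_J = BATTERY_WH * 3600         # ~= 26,640 joules

RADIO_J_PER_MB = 2.0                  # assumed cost to transmit 1 MB
COMPUTE_J_PER_INFERENCE = 0.005       # assumed cost of one local detection pass

raw_mb_per_hour = 2250                # roughly a 5 Mb/s raw video stream
stream_joules_per_hour = raw_mb_per_hour * RADIO_J_PER_MB

inferences_per_hour = 3600            # one local detection pass per second
event_mb_per_hour = 0.01              # a handful of small event messages
edge_joules_per_hour = (inferences_per_hour * COMPUTE_J_PER_INFERENCE
                        + event_mb_per_hour * RADIO_J_PER_MB)

print(f"Stream raw video upstream:    {BATTERY_J / stream_joules_per_hour:8.1f} hours on one charge")
print(f"Detect locally, send events:  {BATTERY_J / edge_joules_per_hour:8.1f} hours on one charge")
```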

In addition to the computational complexity, bandwidth, power, and latency challenges, our new distributed heterogeneous computing systems must also pack unprecedented levels of security, reliability, and robustness. The enormous quantity of important information flowing through the public airwaves presents a vast green field for bad actors to test their craft, and the complexity of these distributed architectures puts a strain on our tried-and-true methods for securing our systems.
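
None of this requires exotic technology at the level of an individual message, though distributed key provisioning and management remain the hard part. As a minimal sketch (the shared key, device name, and message format are made up for illustration), an edge node can at least authenticate its telemetry with a keyed hash so that upstream consumers can detect tampered or spoofed messages; note that this provides integrity and authenticity, not confidentiality.

```python
import hashlib
import hmac
import json

# In a real deployment this key would be provisioned per device and held in
# secure storage; it is hard-coded here only for illustration.
DEVICE_KEY = b"example-shared-secret"

def sign_event(event: dict) -> dict:
    """Attach an HMAC-SHA256 tag so upstream systems can verify integrity."""
    payload = json.dumps(event, sort_keys=True).encode("utf-8")
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"event": event, "hmac": tag}

def verify_event(message: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    payload = json.dumps(message["event"], sort_keys=True).encode("utf-8")
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["hmac"])

msg = sign_event({"camera": "south-entrance", "detection": "2 persons"})
print(verify_event(msg))                      # True
msg["event"]["detection"] = "all clear"
print(verify_event(msg))                      # False - tampering detected
```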

The most exciting benefits of all this new machinery, however, are likely to come from the new powers of observation these systems may bestow upon us. In medicine, for example, we are almost certain to find new correlations between observable data and the onset of dangerous conditions. Monitoring millions of patients and correlating the collected data with diagnoses and outcomes, we should begin to learn new “early warning” signs that could save lives and make treatments less costly and more effective. In just about every industry, one could come up with scenarios where insightful analysis of sensor data could deliver not just new information, but new knowledge about how things work and interrelate.

Finding these patterns in data, of course, is the domain of “big data” analysis and rapidly emerging AI and neural network technology. Rather than locking our systems into canonical algorithms, we can give them the power to intelligently observe and adapt in ways that human programmers could not foresee. Here again, though, conventional computing hardware is giving way to alternative architectures such as GPUs and FPGAs for doing training and inference efficiently. 
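
One reason those architectures fit so well is that neural-network inference reduces to large, regular batches of multiply-accumulate operations, exactly the kind of work that parallel fabrics handle better than a sequential von Neumann pipeline. The toy forward pass below (random weights and inputs standing in for a trained model) shows the shape of that work.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network: 128 sensor features -> 64 hidden units -> 4 classes.
# The weights are random placeholders standing in for a trained model.
W1, b1 = rng.standard_normal((128, 64)), np.zeros(64)
W2, b2 = rng.standard_normal((64, 4)), np.zeros(4)

def infer(x: np.ndarray) -> np.ndarray:
    """One forward pass: two matrix multiplies and an elementwise nonlinearity.

    Every multiply-accumulate is independent of its neighbors, which is why
    GPUs and FPGA fabrics can run this kind of workload so efficiently.
    """
    h = np.maximum(x @ W1 + b1, 0.0)           # ReLU hidden layer
    logits = h @ W2 + b2
    return np.argmax(logits, axis=-1)          # predicted class per input

batch = rng.standard_normal((32, 128))         # 32 sensor readings at once
print(infer(batch))
```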

IoT presents perhaps the largest cross-domain engineering trend we have seen in decades, demanding collaboration and innovation across multiple disciplines – semiconductor processes, hardware architectures, networking, communications, software, mechanical and MEMS, optical, design automation – and multiple verticals: data center, mobile, consumer, industrial, medical, military, and on and on. Just about every press release, product announcement, and PowerPoint deck we have seen over the past year has included some explanation of how the company, product, or technology helps to enable IoT.

This rising tide of sensor data will most definitely overwhelm us, but the faster we can learn to tread water and build systems that can extract useful information from these zettabytes of zeros and ones, the sooner we’ll be able to surf the immense power of our overwrought aquatic IoT metaphor. Uh, meaning, this IoT stuff is about to get interesting.
