
No Moore for MEMS

Sensors Stay Steady

On April 19, 1965, Electronics magazine ran an article called “Cramming More Components Onto Integrated Circuits.” It was written by an engineer at Fairchild Semiconductor, and it contained a simple prediction that turned out to describe the trend that changed the world. Gordon Moore’s article is still the reference point for the explosive growth in semiconductor capability that has lasted for almost fifty years now.

That same year, the same magazine ran another article, describing a device invented by Harvey Nathanson of Westinghouse Labs: a tungsten rod suspended over a transistor to form a “microscopic frequency selective device” – the very first MEMS device. It was later patented as the “Resonant Gate Transistor.”

So – MEMS and logic transistors have both been around for almost fifty years. And, since MEMS and logic transistors are fabricated in the same factories, using the same techniques, and deployed in the same systems, there is a natural temptation to draw parallels between them. Indeed, as I attended the annual MEMS Executive Congress last week, I had the distinct deja vu sense that I was back in 1980s semiconductor land. A tight-knit community of highly motivated people exploring a vast universe of possibilities with an exciting emerging technology whose time has come – it had all the ingredients of that Moore’s Law magic that captured our imaginations and transformed our culture, back before semiconductor production became the exclusive purview of entities with the wealth of nations.

Everyone seems to be silently waiting in anticipation of the same thing. When will MEMS have a Moore’s-Law-like explosion that will catapult companies with Intel-like velocity from shaky startups to stalwart supercorporations? With MEMS in every mobile device, and predictions that the world will contain a trillion MEMS sensors within just a few years, the excitement is palpable. After all, a trillion is a very big number – it works out to roughly 140 sensors for every man, woman, and child on Earth.

There will be no Moore’s Law for MEMS.

While 140 or so MEMS devices for every human being in existence may sound like a lot, to paraphrase Douglas Adams, that’s just peanuts to transistors. With transistor counts running into the billions per device at the latest process nodes, there will be many individuals who own transistors in the trillions. And, while this comparison may seem silly, it does highlight an important fact: Moore’s Law was not about “electronics” or “components” in general. It was about a single type of device – the CMOS logic transistor.

Of course, lithography improved dramatically over the decades, and we can now make smaller, better versions of all kinds of components – including MEMS – as a result. But the component driving that explosion was the only one we knew how to use productively in almost unlimited quantities – the logic transistor. A smartphone or tablet today can put several billion logic transistors to work without missing a beat. If we offered smartphone designers a billion more for free, they’d take them. But it’s hard to figure out what we’d do with more than a few dozen MEMS sensors in a phone. With nine axes of motion sensing (accelerometer, gyroscope, and magnetometer) plus GPS, your phone already knows where it is, which way it’s oriented, and how it’s moving.

Doubling up on those sensors offers no practical value. We could throw in a few variometers, hygrometers, thermometers, barometers, heck – even a spectrometer or two – and our device would be a sensory bad-ass with only a double-digit MEMS tab. And, behind each one of those sensors, we’d still need a massive number of transistors to do the processing required to make use of the data those sensors are dumping out. In fact, the irony of the situation is that the presence of MEMS in our systems is creating renewed demand for much more of the non-MEMS technology – like FPGAs.

There is most certainly a MEMS-driven revolution occurring in our systems. And the proliferation of those sensors – which most likely will fulfill the “trillion sensor” forecasts being tossed around by MEMS industry experts – will absolutely transform the electronics landscape again, just not with a Moore’s Law explosion in MEMS itself.

Consider today’s primary technology driver, the smartphone. There is considerable speculation as to the utility of quad-core, 64-bit processors in smartphones. Why? There just hasn’t been that much processing to do. Once we had devices that could deliver outstanding video gaming performance, there weren’t many application mountains left to climb that required giant, in-phone, heavy-iron processing power. And those big ol’ processors impose a power penalty that’s very hard to ignore given our incredibly tight battery budgets.

But throwing a passel of MEMS sensors into the mix brings on a whole new processing challenge. Now we need to perform sophisticated analyses on massive amounts of data coming from those sensors – often continuously and in real time – in order to achieve the end goal for our system, which is referred to as “context.”

“Context” is simply an understanding of what is going on, extrapolated from a pile of diverse data. Context usually involves answering a simple question reliably – what is the device (or the user of the device) doing right now, and in what environment? After a bunch of algorithms are applied to a crazy stream of data, our system may conclude that the user is now “walking.” Bonus points if it knows other details like where that walking is taking place, how fast the user is going, and what environment the user is walking through.
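To make that inference a little more concrete, here is a toy sketch (in C) of the lowest rung of a context engine: labeling a window of accelerometer samples by the variance of their magnitude. The 50 Hz sample rate, the window length, and the thresholds are illustrative numbers invented for this example – real classifiers are considerably more sophisticated.

    /* Toy activity classifier. Assumes a 3-axis accelerometer
       sampled at 50 Hz, with readings in units of g; window size
       and variance thresholds are made-up illustrative values. */
    #include <math.h>
    #include <stddef.h>

    #define WINDOW 100                      /* 2 seconds at 50 Hz */

    typedef struct { float x, y, z; } accel_sample;

    const char *classify_activity(const accel_sample *s, size_t n)
    {
        float mag[WINDOW];
        float mean = 0.0f, var = 0.0f;

        if (n == 0) return "unknown";
        if (n > WINDOW) n = WINDOW;

        for (size_t i = 0; i < n; i++) {
            mag[i] = sqrtf(s[i].x * s[i].x + s[i].y * s[i].y + s[i].z * s[i].z);
            mean += mag[i];
        }
        mean /= (float)n;

        for (size_t i = 0; i < n; i++)
            var += (mag[i] - mean) * (mag[i] - mean);
        var /= (float)n;

        if (var < 0.05f) return "stationary"; /* magnitude barely changes */
        if (var < 1.0f)  return "walking";
        return "running";
    }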

Making a system that can reliably infer context from cross-correlating a lot of sensor data requires a few good MEMS sensors – and a gigantic amount of ultra-low-power processing prowess. That challenge is one that won’t be addressed by more or better sensors. It is also likely one that won’t be able to get much benefit from that quad-core 64-bit ARM monstrosity. Just powering that thing up for more than a quick after-the-fact analysis breaks the power budget of most battery-powered systems – and pretty much every potentially wearable device.

Solving those processing challenges will most likely require hardware architectures similar to FPGAs – which are the only devices right now that can deliver the combination of ultra-high performance, on-the-fly algorithm reconfigurability, and super-low power consumption that are needed to tackle the sensor-data tsunami. In fact, at least two FPGA companies (QuickLogic and Lattice Semiconductor) have gone after this challenge specifically, producing programmable logic devices suitable for running complex sensor fusion algorithms in battery-operated systems with tight constraints on power, cost, and form factor.
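For a flavor of what a sensor fusion algorithm actually computes, here is a minimal sketch of one classic building block: a complementary filter that blends a gyroscope (accurate over short intervals, but prone to drift) with an accelerometer (noisy, but stable over the long term) to estimate a pitch angle. The blend factor and update rate are illustrative assumptions, not values taken from the QuickLogic or Lattice parts mentioned above.

    /* Complementary filter sketch: fuse gyro and accelerometer
       readings into a pitch estimate. DT and ALPHA are
       illustrative assumptions, not vendor-specified values. */
    #include <math.h>

    #define DT    0.01f   /* 100 Hz update rate */
    #define ALPHA 0.98f   /* weight given to the integrated gyro */

    /* pitch: previous estimate, radians
       gyro_rate: angular rate about the pitch axis, rad/s
       ax, az: accelerometer readings, in g */
    float fuse_pitch(float pitch, float gyro_rate, float ax, float az)
    {
        float accel_pitch = atan2f(ax, az);         /* angle from gravity */
        float gyro_pitch  = pitch + gyro_rate * DT; /* integrated rate */

        /* The gyro supplies the fast dynamics; the accelerometer
           slowly pulls the estimate back to keep drift in check. */
        return ALPHA * gyro_pitch + (1.0f - ALPHA) * accel_pitch;
    }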

But sensor fusion is just the tip of the proverbial iceberg. When there are a trillion sensors out there in the world deluging us with data, our only hope of being able to extract high-quality, real-world, actionable information is a meta-scale heterogeneous client-and-server computing system that spans the gamut from tiny, efficient, local sensor fusion devices to enormous cloud-based, big-data, server farm analysis. Each layer of that meta machine will need to correlate, consolidate, and reduce the data available to it, and then pass the results upstream for higher-level analysis.
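As a hypothetical illustration of that “correlate, consolidate, and reduce” step, a node at the bottom layer might boil a window of raw samples down to a small summary record before forwarding it upstream. The summary layout and the send_upstream() transport hook below are placeholders invented for this sketch.

    /* Edge-level data reduction sketch: one small summary record
       goes upstream instead of n raw samples. send_upstream() is a
       hypothetical stub for whatever transport a real system uses. */
    #include <float.h>
    #include <stddef.h>

    typedef struct {
        unsigned sensor_id;
        float min, max, mean;   /* reduced view of one sample window */
    } summary;

    extern void send_upstream(const summary *s);   /* transport stub */

    void reduce_and_forward(unsigned sensor_id, const float *raw, size_t n)
    {
        summary s = { sensor_id, FLT_MAX, -FLT_MAX, 0.0f };

        if (n == 0) return;

        for (size_t i = 0; i < n; i++) {
            if (raw[i] < s.min) s.min = raw[i];
            if (raw[i] > s.max) s.max = raw[i];
            s.mean += raw[i];
        }
        s.mean /= (float)n;

        send_upstream(&s);   /* each layer repeats this at coarser grain */
    }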

So, even though those sensors won’t have a Moore’s Law of their own, they are likely to be the driving factor in a formidable category of applications that will fuel the need for the same-old Moore’s Law to continue for a few more cycles. 

