feature article

No Moore for MEMS

Sensors Stay Steady

On April 19, 1965, Electronics magazine ran an article called “Cramming More Components Onto Integrated Circuits.” It was written by an engineer at Fairchild Semiconductor, and it contained a simple prediction that came to define the trend that changed the world. Gordon Moore’s article is still the reference point for the explosive growth in semiconductor capability that has continued for almost fifty years now.

In that same year, the same magazine ran another article describing a device invented by Harvey Nathanson of Westinghouse Labs: a tungsten rod suspended over a transistor to form a “microscopic frequency selective device” – the very first MEMS device. The device was later patented as the “Resonant Gate Transistor.”

So – MEMS and logic transistors have both been around for almost fifty years. And, since MEMS and logic transistors are fabricated in the same factories, using the same techniques, and used in the same systems, there is a natural temptation to draw correlations between them. Indeed, as I attended the annual MEMS Executive Congress last week, I had the distinct deja vu sense that I was back in 1980s semiconductor land. A tight-knit community of highly motivated people, exploring a vast universe of possibilities with an exciting emerging technology whose time has come: it had all the ingredients of that Moore’s Law magic that captured our imaginations and transformed our culture, back before semiconductor production became the exclusive purview of entities with the wealth of nations.

Everyone seems to be silently waiting in anticipation of the same thing. When will MEMS have a Moore’s-Law-like explosion that will catapult companies with Intel-like velocity from shaky startups to stalwart supercorporations? With MEMS in every mobile device, and predictions that the world will contain a trillion MEMS sensors within just a few years, the excitement is palpable. After all, a trillion is a very big number – it works out to roughly 140 sensors for every man, woman, and child on Earth.
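For what it’s worth, the per-capita arithmetic is easy to check (the population figure below is an assumed rough 2013 value):

```python
TRILLION_SENSORS = 1_000_000_000_000   # forecast sensor count
WORLD_POPULATION = 7_000_000_000       # assumed rough 2013 figure

per_person = TRILLION_SENSORS / WORLD_POPULATION
print(round(per_person))               # prints 143 -- call it roughly 140
```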

There will be no Moore’s Law for MEMS.

While well over a hundred MEMS devices for every human being in existence may sound like a lot, to paraphrase Douglas Adams, that’s just peanuts to transistors. With transistor counts at the latest process nodes running into the billions per device, there will be many individuals who own transistors in the trillions. And, while this comparison may seem silly, it does highlight an important fact: Moore’s Law was not about “electronics” or “components” in general. It was about one single type of device – the CMOS logic transistor.

Of course, lithography has improved enormously over the decades, and as a result we can now make smaller, better versions of all kinds of components – including MEMS. But the component driving that explosion was the only one we knew how to use productively in almost unlimited quantities: the logic transistor. A smartphone or tablet today can put several billion logic transistors to work without missing a beat. If we offered smartphone designers a billion more for free, they’d take them. But it’s hard to figure out what we’d do with more than a few dozen MEMS sensors in a phone. With nine motion sensors and a GPS, your phone already knows where it is, which way it’s oriented, and how it’s moving.

Doubling up on those sensors offers no practical value. We could throw in a few variometers, hygrometers, thermometers, barometers – heck, even a spectrometer or two – and our device would be a sensory bad-ass with only a double-digit MEMS tab. And behind each one of those sensors, we’d still need a massive number of transistors to do the processing required to make use of the data those sensors are dumping out. In fact, the irony of the situation is that the presence of MEMS in our systems is driving renewed demand for much more non-MEMS technology – like FPGAs.

There is most certainly a MEMS-driven revolution occurring in our systems. And the proliferation of those sensors – which most likely will fulfill the “trillion sensor” forecasts being tossed around by MEMS industry experts – will absolutely transform the electronics landscape again, just not with a Moore’s Law explosion in MEMS itself.

Consider today’s primary technology driver, the smartphone. There is considerable speculation as to the utility of quad-core, 64-bit processors in smartphones. Why? There just hasn’t been that much processing to do. Once we had devices that could deliver outstanding video gaming performance, there weren’t many application mountains left to climb that required giant, in-phone, heavy-iron processing power. And those big ol’ processors impose a power penalty that’s very hard to ignore within our incredibly tight battery budgets.

But throwing a passel of MEMS sensors into the mix brings on a whole new processing challenge. Now we need to perform sophisticated analyses on massive amounts of data coming from those sensors – often constantly and in real time – in order to achieve the end-goal for our system, which is referred to as “context.” 

“Context” is simply an understanding of what is going on, extrapolated from a pile of diverse data. Context usually involves answering a simple question reliably – what is the device (or the user of the device) doing right now, and in what environment? After a bunch of algorithms are applied to a crazy stream of data, our system may conclude that the user is now “walking.” Bonus points if it knows other details like where that walking is taking place, how fast the user is going, and what environment the user is walking through.
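As a concrete (and heavily simplified) illustration of that kind of inference, here is a sketch in Python of how a system might guess “walking” from a window of 3-axis accelerometer samples. The thresholds and the zero-crossing cadence estimate are illustrative assumptions, not any vendor’s actual algorithm:

```python
import math

def classify_motion(samples, rate_hz=50.0):
    """Naive context guess from 3-axis accelerometer samples.

    samples: list of (x, y, z) tuples in units of g.
    Thresholds are illustrative only.
    Returns "still", "walking", or "unknown".
    """
    mags = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in samples]
    mean = sum(mags) / len(mags)
    ac = [m - mean for m in mags]              # remove gravity / DC offset
    var = sum(a * a for a in ac) / len(ac)
    if var < 0.001:                            # almost no motion at all
        return "still"
    # Estimate the dominant cadence from zero crossings of the AC signal.
    crossings = sum(1 for a, b in zip(ac, ac[1:]) if a * b < 0)
    cadence_hz = crossings / 2.0 / (len(ac) / rate_hz)
    if 1.0 <= cadence_hz <= 3.0:               # typical human step rates
        return "walking"
    return "unknown"
```

A real sensor-fusion engine would cross-check this guess against the gyroscope, barometer, and GPS – which is exactly where the processing burden starts to pile up.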

Making a system that can reliably infer context by cross-correlating a lot of sensor data requires a few good MEMS sensors – and a gigantic amount of ultra-low-power processing prowess. That challenge is one that won’t be addressed by more or better sensors. It is also one that is unlikely to get much benefit from that quad-core, 64-bit ARM monstrosity. Just powering that thing up for more than a quick after-the-fact analysis breaks the power budget of most battery-powered systems – and pretty much every potentially wearable device.

Solving those processing challenges will most likely require hardware architectures similar to FPGAs – which are the only devices right now that can deliver the combination of ultra-high performance, on-the-fly algorithm reconfigurability, and super-low power consumption needed to tackle the sensor-data tsunami. In fact, at least two FPGA companies (QuickLogic and Lattice Semiconductor) have gone after this challenge specifically, producing programmable logic devices suitable for running complex sensor fusion algorithms in battery-operated systems with tight constraints on power, cost, and form factor.

But sensor fusion is just the tip of the proverbial iceberg. When there are a trillion sensors out there in the world deluging us with data, our only hope of being able to extract high-quality, real-world, actionable information is a meta-scale heterogeneous client-and-server computing system that spans the gamut from tiny, efficient, local sensor fusion devices to enormous cloud-based, big-data, server farm analysis. Each layer of that meta machine will need to correlate, consolidate, and reduce the data available to it, and then pass the results upstream for higher-level analysis.
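The layered reduction idea can be sketched in a few lines of Python. The tier names and the ten-to-one windowed-average reduction are purely hypothetical stand-ins for whatever correlation and consolidation each real layer would perform:

```python
def reduce_layer(readings, window=10):
    """Consolidate raw readings into windowed averages (one tier's job)."""
    return [sum(readings[i:i + window]) / window
            for i in range(0, len(readings) - window + 1, window)]

# Each tier of the hypothetical hierarchy shrinks the data before
# passing results upstream: sensor hub -> gateway -> cloud.
raw = [float(i % 7) for i in range(1000)]   # stand-in sensor stream
hub = reduce_layer(raw, window=10)          # 1000 values -> 100
gateway = reduce_layer(hub, window=10)      # 100 values  -> 10
print(len(raw), len(hub), len(gateway))     # prints: 1000 100 10
```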

So, even though those sensors won’t have a Moore’s Law of their own, they are likely to be the driving factor in a formidable category of applications that will fuel the need for the same-old Moore’s Law to continue for a few more cycles. 

