
Measuring PPs for Science

EEMBC’s Newest Benchmark Grades Peripheral Power

“I was a peripheral visionary. I could see the future, but only way off to the side.” – Steven Wright

Persistent processor appraisers promulgate propitious program promoting peripheral parsimony.

Translation: EEMBC has a new benchmark that measures MCU power efficiency.

Once upon a time, estimating an MCU’s power consumption was dead simple: it was printed at the bottom of the datasheet. That was before the advent of umpty-dozen different power-saving modes, when a chip’s active power wasn’t much different from its quiescent power. An on-chip UART or timer, for example, consumed about the same amount of power whether you used it or not. That wasn’t always a good thing, but it sure did make power budgeting easier.

Now, an MCU’s power consumption can change radically from moment to moment, depending on what it’s doing. This is generally reckoned as Progress, and A Good Thing for Mankind, but it really does complicate the science of designing power supplies, estimating battery life, planning thermal management, and placing decoupling capacitors.

If any of those criteria are important to you – especially the battery-life thing – you’ll want to familiarize yourself with EEMBC’s ULPMark-PP. It documents the reality of time-variable power consumption caused by peripheral activity. Want to know how much a certain MCU’s current draw will fluctuate with PWM activity? ULPMark-PP has got your number.

But wait! Isn’t there already a ULPMark benchmark? Why yes, dear EEMBC enthusiast, there is, but the “base” ULPMark (called ULPMark-CP) measures processor performance, not peripheral activity. It’s more of a traditional benchmark for programmers and designers who are trying to narrow down the list of potential MCUs with the right power/performance ratio to get the job done. It, too, measures energy (i.e., the integration of power over time), but only whilst the MCU is doing compute-intensive tasks. Peripherals don’t enter into the ULPMark-CP equation.
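Since both benchmarks boil down to energy — power integrated over time — it's worth being concrete about what that integration looks like. Here's a minimal sketch using the trapezoidal rule; the current trace and voltage are made-up illustrative numbers, not anything EEMBC publishes:

```python
# Energy is the integral of power over time: E = integral of P(t) dt.
# Approximate it from a sampled current trace with the trapezoidal rule.

def energy_joules(times_s, currents_a, voltage_v):
    """Trapezoidal integration of instantaneous power (V * I) over time."""
    total = 0.0
    for i in range(1, len(times_s)):
        dt = times_s[i] - times_s[i - 1]
        p0 = voltage_v * currents_a[i - 1]
        p1 = voltage_v * currents_a[i]
        total += 0.5 * (p0 + p1) * dt
    return total

# Hypothetical trace: 10 mA burst for 1 ms, then a 1 uA sleep for 999 ms.
t = [0.0, 0.001, 0.001, 1.0]
i = [0.010, 0.010, 1e-6, 1e-6]
print(energy_joules(t, i, 3.0))  # the 1 ms burst dominates: ~33 uJ total
```

Note that the millisecond-long burst contributes roughly ten times the energy of the entire 999 ms sleep period — which is exactly why time-varying consumption matters.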

That’s what sister test ULPMark-PP is for. Rather than exercising the processor core, PP tests up to four on-chip peripherals in various states of activity. Right now, ULPMark-PP understands analog-to-digital converters (ADC), pulse-width modulators (PWM), the SPI serial peripheral interface, and real-time clocks (RTC).

The test works by alternately activating and deactivating each peripheral in turn, mixing them in various combinations until it has worked through ten different phases. For example, it starts by asking the ADC to collect 64 samples at 1 kHz while the PWM operates at a fixed 10% duty cycle and the RTC keeps track of time. (The SPI is inactive in this first phase.) Then, one second later, the PWM duty cycle is gradually ramped up to 20%. After that, the ADC sampling rate drops by a factor of 1000, to just 1 sample per second. And so on, through various configurations for each peripheral, some increasing their activity level, some decreasing, and some inactive.
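The sequence described above can be pictured as a table of per-peripheral settings, one row per one-second phase. The sketch below follows the first three phases as the article describes them; the field names (and the omitted remaining phases) are hypothetical, not EEMBC's actual test definition:

```python
# Illustrative sketch of a peripheral-exercise schedule in the spirit of
# ULPMark-PP. Only the first three phases follow the article's description;
# the dictionary keys are invented for this example.

PHASES = [
    # each phase runs for one second
    {"adc_hz": 1000, "adc_samples": 64, "pwm_duty": 0.10, "rtc": True, "spi": False},
    {"adc_hz": 1000, "adc_samples": 64, "pwm_duty": 0.20, "rtc": True, "spi": False},
    {"adc_hz": 1,    "adc_samples": 1,  "pwm_duty": 0.20, "rtc": True, "spi": False},
]

def describe(phase):
    """Render one phase's peripheral settings as a readable summary line."""
    parts = [f"ADC {phase['adc_samples']} samples @ {phase['adc_hz']} Hz",
             f"PWM {phase['pwm_duty']:.0%} duty",
             "RTC on" if phase["rtc"] else "RTC off",
             "SPI on" if phase["spi"] else "SPI off"]
    return ", ".join(parts)

for n, ph in enumerate(PHASES, 1):
    print(f"phase {n}: {describe(ph)}")
```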

Throughout the ten exercises, instantaneous current can vary by a factor of ten, depending on how the chip is designed and how aggressively it sleeps when its various subcomponents aren’t needed. Most MCUs don’t need anywhere near a full second to perform their given tasks, so each phase of the test tends to see a strong spike in current draw at the beginning of its one-second window, followed by a comparatively long period of relative quiet. This is a deliberate part of the design of ULPMark-PP, because it’s pretty typical of how low-power MCUs are actually programmed and deployed.
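That spike-then-sleep pattern means average current is just a time-weighted blend of active and sleep currents — and the length of the active window matters enormously. A back-of-envelope sketch, with all current figures hypothetical:

```python
def avg_current_ma(i_active_ma, i_sleep_ma, active_fraction):
    """Time-weighted average current for a spike-then-sleep duty cycle."""
    return active_fraction * i_active_ma + (1 - active_fraction) * i_sleep_ma

# Hypothetical MCU: 5 mA awake, 2 uA asleep, busy for 2 ms of each second.
print(avg_current_ma(5.0, 0.002, 0.002))   # ~0.012 mA average

# Same chip kept awake for 20 ms per second instead:
print(avg_current_ma(5.0, 0.002, 0.020))   # ~0.102 mA -- nearly 10x worse
```

Stretching the active window from 2 ms to 20 ms per second balloons average current by almost an order of magnitude, which is why how aggressively a chip sleeps dominates its score.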

An MCU that’s able to operate its ADC while remaining in low-power mode (ST’s STM32L433 is one example) saves energy by doing analog conversion in its sleep. In contrast, other MCUs need to wake the entire device (or parts of it) to babysit the ADC a thousand times per second. Similarly, some MCUs have PWM outputs that can operate while the chip is comatose, so long as the PWM duty cycle doesn’t change. Changing the duty cycle, however, may trigger a wakeup call, increasing activity and power consumption. How, when, and where that wakeup happens depends on the chip’s design. The first two phases of the benchmark are designed to probe those sorts of differences.
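The energy cost of that difference is easy to estimate. The comparison below pits two hypothetical chips against the 1 kHz ADC task: one converts autonomously while asleep, the other wakes its core for every sample. Every current and timing figure here is an invented illustration, not a measured STM32L433 (or any other) datasheet value:

```python
# Compare two hypothetical MCUs servicing 1000 ADC samples per second.
# Chip A converts while asleep; chip B wakes the core for every sample.
# All currents and wake durations are made-up illustrative figures.

V = 3.0                      # supply voltage, volts
SAMPLES_PER_S = 1000

def energy_per_second_uj(i_sleep_ua, i_wake_ma, wake_us_per_sample):
    """Energy (uJ) over one second: sleep-mode floor plus wake bursts."""
    wake_s = SAMPLES_PER_S * wake_us_per_sample * 1e-6
    sleep_s = 1.0 - wake_s
    e_j = V * (i_wake_ma * 1e-3 * wake_s + i_sleep_ua * 1e-6 * sleep_s)
    return e_j * 1e6

# Chip A: ADC runs autonomously in low-power mode (8 uA, no core wakeups).
print(energy_per_second_uj(i_sleep_ua=8, i_wake_ma=0, wake_us_per_sample=0))
# Chip B: 2 uA sleep, but the core wakes for 20 us at 4 mA per sample.
print(energy_per_second_uj(i_sleep_ua=2, i_wake_ma=4, wake_us_per_sample=20))
```

Even though chip B has the lower sleep current, its thousand wakeups per second cost it roughly ten times the energy — the sort of gap the benchmark's first phases are built to expose.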

Apart from comparing one MCU to another, ULPMark-PP is also handy for comparing an MCU to itself. How does energy efficiency improve (if at all) when you lower the voltage from 3.3V to 1.8V? Let’s find out!

EEMBC tested a fistful of Ambiq, Microchip, Silicon Labs, STMicroelectronics, and Texas Instruments parts, and their results strayed far from the square-law curve. The smallest improvement was just 11% when shifting from 3.3V to 1.8V, while the largest was 92%. Bear in mind that these metrics represent an increase in ULPMark-PP scores, not a simple decrease in overall energy consumption. A 92% improvement doesn’t mean you’ve nearly eliminated all energy consumption and are running purely on good wishes and rainbows – only that you’ve nearly doubled the chip’s ULPMark-PP score. Still, it’s good to know that dropping your supply voltage will (usually) deliver a significant decrease in overall power. And it’s even better to know which chips will deliver that benefit.
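To see just how far those results stray from the square-law ideal, compare them against pure CV²-style scaling. The 11% and 92% figures come from the article; the rest is simple arithmetic under the assumption that the score scales inversely with energy:

```python
# What ideal square-law (CV^2) dynamic-energy scaling would predict for a
# 3.3 V -> 1.8 V supply drop, versus the 11%-92% score gains reported.

V_HI, V_LO = 3.3, 1.8

energy_ratio = (V_LO / V_HI) ** 2          # ideal dynamic-energy scaling
ideal_score_gain = 1.0 / energy_ratio - 1  # assuming score ~ 1 / energy

print(f"ideal energy ratio: {energy_ratio:.2f}")      # ~0.30
print(f"ideal score gain:   {ideal_score_gain:.0%}")  # ~236%
print("reported gains:      11% to 92%")
```

In other words, a chip whose energy were purely square-law-dominated would more than triple its score at 1.8V; real parts, with their fixed sleep floors and regulator overheads, fall well short of that.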

If you’re interested in running your own ULPMark-PP tests on your own hardware – and really, who isn’t? – you can buy the very same power-measurement test rig that the white-coated lab assistants at EEMBC use. The Energy Monitor 2.0 board will soon be available from “a major distributor,” and it allows anyone with a modicum of electrical engineering skill to measure both static and dynamic current consumption with nanoamp accuracy and microsecond resolution.

EEMBC posts its ULPMark-PP scores right alongside its ULPMark-CP scores, under the assumption that most MCU tire-kickers will want to evaluate both at the same time. Not surprisingly, some chips do better on the processor benchmark than they do on the peripheral benchmark. Which do you choose? That depends on your requirements, vendor affinity, software ecosystem, price, availability, and phase of the moon, among other factors. But at least now you can make an informed decision, and that’s all any benchmark can hope to provide.
