
Measuring PPs for Science

EEMBC’s Newest Benchmark Grades Peripheral Power

“I was a peripheral visionary. I could see the future, but only way off to the side.” – Steven Wright

Persistent processor appraisers promulgate propitious program promoting peripheral parsimony.

Translation: EEMBC has a new benchmark that measures MCU power efficiency.

Once upon a time, estimating an MCU’s power consumption was dead simple: it was printed at the bottom of the datasheet. That was before the advent of umpty-dozen different power-saving modes, when a chip’s active power wasn’t much different from its quiescent power. An on-chip UART or timer, for example, consumed about the same amount of power whether you used it or not. That wasn’t always a good thing, but it sure did make power budgeting easier.

Now, an MCU’s power consumption can change radically from moment to moment, depending on what it’s doing. This is generally reckoned as Progress, and A Good Thing for Mankind, but it really does complicate the science of designing power supplies, estimating battery life, planning thermal management, and placing decoupling capacitors.

If any of those criteria are important to you – especially the battery-life thing – you’ll want to familiarize yourself with EEMBC’s ULPMark-PP. It documents the reality of time-variable power consumption caused by peripheral activity. Want to know how much a certain MCU’s current draw will fluctuate with PWM activity? ULPMark-PP has got your number.

But wait! Isn’t there already a ULPMark benchmark? Why yes, dear EEMBC enthusiast, there is, but the “base” ULPMark (called ULPMark-CP) measures processor performance, not peripheral activity. It’s more of a traditional benchmark for programmers and designers who are trying to narrow down the list of potential MCUs with the right power/performance ratio to get the job done. It, too, measures energy (i.e., the integration of power over time), but only whilst the MCU is doing compute-intensive tasks. Peripherals don’t enter into the ULPMark-CP equation.
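Since both benchmarks boil down to "integrate power over time," here's a minimal sketch of that bookkeeping. The supply voltage, current samples, and 1 ms interval are invented for illustration; this is not EEMBC's harness code.

```python
# Hypothetical illustration: integrate sampled power (P = V * I) over time
# to get energy, the quantity ULPMark-style benchmarks actually score.

def energy_joules(voltage_v, current_samples_a, dt_s):
    """Trapezoidal integration of instantaneous power over uniform samples."""
    powers = [voltage_v * i for i in current_samples_a]
    return sum((a + b) / 2.0 * dt_s for a, b in zip(powers, powers[1:]))

# Current ramping from 1 mA to 5 mA across 4 ms at a 3.3 V supply:
samples_a = [0.001, 0.002, 0.003, 0.004, 0.005]
print(energy_joules(3.3, samples_a, 0.001))  # ~3.96e-05 J
```

The same arithmetic applies whether the current comes from a crunching core (ULPMark-CP) or from a peripheral ticking away while the core sleeps (ULPMark-PP); only the workload differs.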

That’s what sister test ULPMark-PP is for. Rather than exercising the processor core, PP tests up to four on-chip peripherals in various states of activity. Right now, ULPMark-PP understands analog-to-digital converters (ADC), pulse-width modulators (PWM), serial peripheral interfaces (SPI), and real-time clocks (RTC).

The test works by alternately activating and deactivating each peripheral in turn, and mixing them up in various combinations until it’s worked through ten different permutations. For example, it starts by asking the ADC to collect 64 samples at 1 kHz while the PWM operates at a fixed 10% duty cycle and the RTC keeps track of time. (The SPI is inactive in this first phase.) Then, one second later, the PWM duty cycle is gradually ramped up to 20%. After that, the ADC sampling rate drops by a factor of 1,000 to just 1 sample/second. And so on, working through various configurations for each peripheral, some increasing their activity level, some decreasing, and some inactive.
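The workload amounts to a schedule of one-second phases, each pinning every peripheral to a state. Here's a sketch of the first three phases as data, using only the settings described above; the field names are my own and the remaining seven phases are omitted.

```python
# Sketch of a phase schedule like the one ULPMark-PP steps through.
# Only the settings the article describes are filled in; everything
# else (field names, structure) is hypothetical.

phases = [
    # Phase 1: ADC sampling at 1 kHz, PWM at a fixed 10% duty, RTC on, SPI idle
    {"adc": {"rate_hz": 1000, "samples": 64}, "pwm": {"duty": 0.10}, "rtc": "on", "spi": "off"},
    # Phase 2: PWM duty ramps up to 20%
    {"adc": {"rate_hz": 1000, "samples": 64}, "pwm": {"duty": 0.20}, "rtc": "on", "spi": "off"},
    # Phase 3: ADC rate drops by 1,000x to 1 sample/second
    {"adc": {"rate_hz": 1, "samples": 64}, "pwm": {"duty": 0.20}, "rtc": "on", "spi": "off"},
]

for n, p in enumerate(phases, 1):
    print(f"phase {n}: ADC {p['adc']['rate_hz']} Hz, PWM duty {p['pwm']['duty']:.0%}")
```

A table like this is also roughly what you'd hand a test harness: one row per second, one column per peripheral.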

Throughout the ten exercises, instantaneous current can vary by a factor of ten, depending on how the chip is designed and how aggressively it sleeps when its various subcomponents aren’t needed. Most MCUs don’t need anywhere near a full second to perform their given tasks, so each phase of the test tends to see a strong spike in current draw at the start of its one-second window, followed by a comparatively long period of relative quiet. This is a deliberate part of the design of ULPMark-PP, because it’s pretty typical of how low-power MCUs are actually programmed and deployed.

An MCU that’s able to operate its ADC while remaining in low-power mode (ST’s STM32L433 is one example) saves energy by doing analog conversion in its sleep. In contrast, other MCUs need to wake the entire device (or parts of it) to babysit the ADC a thousand times per second. Similarly, some MCUs have PWM outputs that can operate while the chip is comatose, so long as the PWM duty cycle doesn’t change. Changing the duty cycle, however, may trigger a wakeup call, increasing activity and power consumption. How, when, and where the MCU handles this situation depends on the MCU. The first two phases of the benchmark are designed to probe those sorts of differences.  

Apart from comparing one MCU to another, ULPMark-PP is also handy for comparing an MCU to itself. How does energy efficiency improve (if at all) when you lower the voltage from 3.3V to 1.8V? Let’s find out!

EEMBC tested a fistful of Ambiq, Microchip, Silicon Labs, STMicroelectronics, and Texas Instruments parts, and their results strayed far from the square-law curve. The smallest improvement was just 11% when shifting from 3.3V to 1.8V, while the largest was 92%. Bear in mind that these figures represent an increase in ULPMark-PP scores, not a simple decrease in overall energy consumption. A 92% improvement doesn’t mean you’ve nearly eliminated all energy consumption and are running purely on good wishes and rainbows – only that you’ve nearly doubled the score on the characteristics that ULPMark-PP measures. Still, it’s good to know that dropping your supply voltage will (usually) yield a significant decrease in overall energy. And it’s even better to know which chips will deliver that benefit.
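For perspective on what a score increase actually buys you: if the score is inversely proportional to measured energy (ULPMark-CP is specified that way; I'm assuming -PP follows suit), a score gain converts to an energy saving like this.

```python
# Assumption (hedged): a ULPMark-style score is inversely proportional to
# the energy consumed, so doubling the score halves the measured energy.

def energy_saving(score_gain):
    """score_gain=0.92 means the score rose 92%; returns fraction of energy saved."""
    return 1.0 - 1.0 / (1.0 + score_gain)

print(f"92% score gain -> {energy_saving(0.92):.0%} less energy")
print(f"11% score gain -> {energy_saving(0.11):.0%} less energy")
```

Under that assumption, the best 3.3V-to-1.8V result (a 92% score gain) corresponds to roughly halving the energy bill, and the worst (11%) to shaving off about a tenth.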

If you’re interested in running your own ULPMark-PP tests on your own hardware – and really, who isn’t? – you can buy the very same power-measurement test rig that the white-coated lab assistants at EEMBC use. The Energy Monitor 2.0 board will soon be available from “a major distributor” and it allows anyone with a modicum of electrical engineering skill to measure both static and dynamic current consumption with nanoamp accuracy and microsecond resolution.

EEMBC posts its ULPMark-PP scores right alongside its ULPMark-CP scores, under the assumption that most MCU tire-kickers will want to evaluate both at the same time. Not surprisingly, some chips do better on the processor benchmark than they do on the peripheral benchmark. Which do you choose? That depends on your requirements, vendor affinity, software ecosystem, price, availability, and phase of the moon, among other factors. But at least now you can make an informed decision, and that’s all any benchmark can hope to provide.
