
Measuring PPs for Science

EEMBC’s Newest Benchmark Grades Peripheral Power

“I was a peripheral visionary. I could see the future, but only way off to the side.” – Steven Wright

Persistent processor appraisers promulgate propitious program promoting peripheral parsimony.

Translation: EEMBC has a new benchmark that measures MCU power efficiency.

Once upon a time, estimating an MCU’s power consumption was dead simple: it was printed at the bottom of the datasheet. That was before the advent of umpty-dozen different power-saving modes, when a chip’s active power wasn’t much different from its quiescent power. An on-chip UART or timer, for example, consumed about the same amount of power whether you used it or not. That wasn’t always a good thing, but it sure did make power budgeting easier.

Now, an MCU’s power consumption can change radically from moment to moment, depending on what it’s doing. This is generally reckoned as Progress, and A Good Thing for Mankind, but it really does complicate the science of designing power supplies, estimating battery life, planning thermal management, and placing decoupling capacitors.

If any of those criteria are important to you – especially the battery-life thing – you’ll want to familiarize yourself with EEMBC’s ULPMark-PP. It documents the reality of time-variable power consumption caused by peripheral activity. Want to know how much a certain MCU’s current draw will fluctuate with PWM activity? ULPMark-PP has got your number.

But wait! Isn’t there already a ULPMark benchmark? Why yes, dear EEMBC enthusiast, there is, but the “base” ULPMark (called ULPMark-CP) measures processor performance, not peripheral activity. It’s more of a traditional benchmark for programmers and designers who are trying to narrow down the list of potential MCUs with the right power/performance ratio to get the job done. It, too, measures energy (i.e., the integration of power over time), but only whilst the MCU is doing compute-intensive tasks. Peripherals don’t enter into the ULPMark-CP equation.

That’s what sister test ULPMark-PP is for. Rather than exercising the processor core, PP tests up to four on-chip peripherals in various states of activity. Right now, ULPMark-PP understands analog-to-digital converters (ADC), pulse-width modulators (PWM), serial peripheral interfaces (SPI), and real-time clocks (RTC).

The test works by alternately activating and deactivating each peripheral in turn, and mixing them up in various combinations until it’s worked through ten different permutations. For example, it starts by asking the ADC to collect 64 samples at 1 kHz while the PWM operates at a fixed 10% duty cycle and the RTC keeps track of time. (The SPI is inactive in this first phase.) Then, one second later, the PWM duty cycle is gradually ramped up to 20%. After that, the ADC sampling rate drops by a factor of 1000 to just 1 sample/second. And so on, working through various configurations for each peripheral, some increasing their activity level, some decreasing, and some inactive.
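To make the structure concrete, here’s a rough sketch (in generic C, not EEMBC’s actual test code) of how that phase schedule might be tabulated; the struct and field names are invented purely for illustration:

```c
/* Illustrative sketch only -- not EEMBC's actual ULPMark-PP source.
 * Each row describes one one-second benchmark phase: ADC sample rate,
 * PWM duty cycle, and whether the SPI and RTC are active.             */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t adc_rate_hz;   /* 0 = ADC idle during this phase          */
    uint8_t  pwm_duty_pct;  /* fixed duty cycle for the phase          */
    bool     spi_active;    /* shift a small buffer out over SPI?      */
    bool     rtc_active;    /* RTC keeps time throughout               */
} phase_cfg_t;

/* The first few phases as described above; the full test runs ten.    */
static const phase_cfg_t phases[] = {
    { 1000, 10, false, true },  /* ADC: 64 samples at 1 kHz, PWM 10%   */
    { 1000, 20, false, true },  /* PWM duty cycle ramped to 20%        */
    {    1, 20, false, true },  /* ADC slows to 1 sample/second        */
    /* ...remaining phases raise, lower, or idle each peripheral...    */
};
```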

Throughout the ten exercises, instantaneous current can vary by a factor of ten, depending on how the chip is designed and how aggressively it sleeps when its various subcomponents aren’t needed. Most MCUs don’t need anywhere near a full second to perform their given tasks, so each phase of the test tends to see a sharp spike in current draw at the beginning of its one-second window, followed by a comparatively long period of relative quiet. This is a deliberate part of the design of ULPMark-PP, because it’s pretty typical of how low-power MCUs are actually programmed and deployed.
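In firmware terms, that pattern is the familiar burst-then-sleep loop. A minimal, generic sketch follows; the helper functions are hypothetical stand-ins for whatever low-power mode and RTC alarm a particular MCU provides:

```c
/* Burst-then-sleep skeleton -- the general shape of a ULPMark-PP phase,
 * not EEMBC's implementation. The helpers below are hypothetical
 * placeholders for vendor-specific calls.                              */
void configure_phase_peripherals(int phase);  /* set ADC rate, PWM duty  */
void do_phase_work(int phase);                /* brief burst of activity */
void sleep_until_next_second(void);           /* RTC alarm + deep sleep  */

void run_all_phases(void)
{
    for (int phase = 0; phase < 10; phase++) {
        configure_phase_peripherals(phase);
        do_phase_work(phase);          /* the current spike              */
        sleep_until_next_second();     /* the long, quiet remainder      */
    }
}
```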

An MCU that’s able to operate its ADC while remaining in low-power mode (ST’s STM32L433 is one example) saves energy by doing analog conversion in its sleep. In contrast, other MCUs need to wake the entire device (or parts of it) to babysit the ADC a thousand times per second. Similarly, some MCUs have PWM outputs that can operate while the chip is comatose, so long as the PWM duty cycle doesn’t change. Changing the duty cycle, however, may trigger a wakeup call, increasing activity and power consumption. How, when, and where the MCU handles this situation depends on the MCU. The first two phases of the benchmark are designed to probe those sorts of differences.  
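The difference is easier to see as code. Below is a vendor-neutral sketch of the two approaches to the 1 kHz ADC task; every function name is a hypothetical placeholder, not any manufacturer’s real API:

```c
#include <stdint.h>

/* Hypothetical helpers -- every vendor spells these differently.       */
void     sleep_until_adc_interrupt(void);
uint16_t read_adc_result(void);
void     start_low_power_adc_sequence(uint16_t *buf, int n, int rate_hz);
void     deep_sleep_until_sequence_done(void);

/* (a) The core wakes for every conversion: ~1000 wakeups per second.   */
void sample_with_wakeups(uint16_t *buf)
{
    for (int i = 0; i < 64; i++) {
        sleep_until_adc_interrupt();   /* light sleep, frequent wakeups  */
        buf[i] = read_adc_result();
    }
}

/* (b) A low-power ADC (plus DMA) fills the buffer while the core stays
 * in deep sleep; the core wakes once when the whole batch is ready.    */
void sample_while_asleep(uint16_t *buf)
{
    start_low_power_adc_sequence(buf, 64, 1000);
    deep_sleep_until_sequence_done();  /* one wakeup per 64 samples      */
}
```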

Apart from comparing one MCU to another, ULPMark-PP is also handy for comparing an MCU to itself. How does energy efficiency improve (if at all) when you lower the voltage from 3.3V to 1.8V? Let’s find out!

EEMBC tested a fistful of Ambiq, Microchip, Silicon Labs, STMicroelectronics, and Texas Instruments parts, and their results strayed far from the square-law curve. The smallest improvement was just 11% when shifting from 3.3V to 1.8V, while the largest was 92%. Bear in mind that these figures represent an increase in ULPMark-PP scores, not a simple decrease in overall energy consumption. A 92% improvement doesn’t mean you’ve nearly eliminated all energy consumption and are running purely on good wishes and rainbows – only that you’ve nearly doubled your score on what ULPMark-PP measures. Still, it’s good to know that dropping your supply voltage will (usually) yield a significant decrease in overall energy. And it’s even better to know which chips will deliver that benefit.
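For context, the square-law reference point comes from the fact that dynamic switching energy scales roughly with the square of supply voltage (E ≈ C·V²). A quick back-of-the-envelope calculation, assuming the ULPMark-PP score scales inversely with energy, shows how far short real parts fall:

```c
/* Naive square-law estimate of the benefit of a supply-voltage drop.
 * Assumes purely dynamic switching energy (E ~ C*V^2); real MCUs also
 * have leakage, analog bias currents, and regulators that don't scale
 * the same way, which is why measured gains are much smaller.          */
#include <stdio.h>

int main(void)
{
    double v_high = 3.3, v_low = 1.8;
    double ideal  = (v_high * v_high) / (v_low * v_low);  /* about 3.36 */

    /* If the score were inversely proportional to energy, that would be
     * roughly a 236% score increase -- versus the 11% to 92% actually
     * measured across the tested parts.                                */
    printf("Ideal square-law improvement: %.2fx\n", ideal);
    return 0;
}
```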

If you’re interested in running your own ULPMark-PP tests on your own hardware – and really, who isn’t? – you can buy the very same power-measurement test rig that the white-coated lab assistants at EEMBC use. The Energy Monitor 2.0 board will soon be available from “a major distributor,” and it allows anyone with a modicum of electrical engineering skill to measure both static and dynamic current consumption with nanoamp accuracy and microsecond resolution.
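The arithmetic behind any such measurement is just the energy integral, E = ∫ V·I dt. Here’s a minimal sketch, assuming a fixed supply voltage, evenly spaced current samples, and a made-up trace; the real Energy Monitor works at far finer resolution:

```c
/* Minimal sketch: turn sampled current into energy, E = V * integral(I dt),
 * using the trapezoidal rule. Fixed supply voltage and a fabricated
 * current trace are assumed purely for illustration.                   */
#include <stdio.h>

double energy_joules(const double *i_amps, int n, double dt_s, double v)
{
    double charge = 0.0;                       /* coulombs */
    for (int k = 1; k < n; k++)
        charge += 0.5 * (i_amps[k - 1] + i_amps[k]) * dt_s;
    return v * charge;
}

int main(void)
{
    /* Made-up trace: a brief 2 mA burst, then ~500 nA of sleep current. */
    double trace[] = { 2e-3, 2e-3, 2e-3, 5e-7, 5e-7, 5e-7, 5e-7, 5e-7 };
    printf("E = %.3e J\n", energy_joules(trace, 8, 1e-6, 3.3));
    return 0;
}
```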

EEMBC posts its ULPMark-PP scores right alongside its ULPMark-CP scores, under the assumption that most MCU tire-kickers will want to evaluate both at the same time. Not surprisingly, some chips do better on the processor benchmark than they do on the peripheral benchmark. Which do you choose? That depends on your requirements, vendor affinity, software ecosystem, price, availability, and phase of the moon, among other factors. But at least now you can make an informed decision, and that’s all any benchmark can hope to provide.
