
More Benchmarks from EEMBC

The IoT Gets Some Attention

Everyone loves a hype cycle – at least until the disenchantment phase kicks in. Nothing spoils a party like someone demanding more than just words on a brochure. I mean, after all, anyone should be able to call their widget the highest-performance, lowest-power thing ever, right? Requiring actual evidence is such a downer.

And if you’re talking about the Internet of Things (IoT), then… well, where do you even start? You’ve got edge nodes – the gadgetry containing sensors and/or actuators; you’ve got local computation in the gateway (maybe); you’ve got the cloud; you’ve got the networks connecting everything; you’ve got security and privacy – or not – and all those things have to work right for everything to, well, work right.

At one level, you’ve got actual customers – residential or industrial or automotive or agricultural or medical or whatever – trying to figure out what works best. At another level, you have designers specifically trying to make their widget be the best. That means they need to know which components are going to give them the speed/power/cost profile that they need in order to make that end customer happy.

Well, there’s an app for that. We call them benchmarks – otherwise known as calling everyone out on their claims. EEMBC announced an effort to address issues of interest to makers of anything IoT back in 2015, starting with what was then called ULPBench – a benchmark focused on measuring the power consumption of ultra-low-power (that’s the ULP bit) MCUs.

Over the last several months, new progress and new efforts have been announced; we’ll review some of them here.

ULPBench – er – let’s make that ULPMark – 2.0

The next wave of ultra-low power measurement is nearing completion. But, consistent with a number of EEMBC’s benchmarks having names ending in “Mark,” they’ve renamed ULPBench to ULPMark.

The first version of this benchmark focused on the computing part of the MCU; the next looks at some of the critical peripheral circuits, in particular, the:

  • Analog-to-digital converter (ADC)
  • Serial Peripheral Interface (SPI – this is also a proxy for I2C)
  • Pulse-width modulation (PWM)
  • Real-time clock (RTC)

The thing is, it’s not like there are four benchmark suites, one for each of these. More in the spirit of Design of Experiments, there are ten one-second “activity slots” that run in succession. Each has a different combination of settings. Everything sleeps when not used.

For example, slot 5 (picking at random) has:

  • the ADC taking one sample with a minimum sampling time of 20 µs
  • PWM having
    • a clock running at 32,768 Hz
    • a period of 255 clocks
    • 100 pulses
    • 50% duty cycle
    • fixed mode
  • SPI, which
    • checks that it received data from the previous slot
    • transmits the 128-bit buffer from the current slot (which will be duly checked in the next slot)
  • RTC, which is set up with a timer start in the first slot and just keeps running until the last slot, where the elapsed time is checked.
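To make the slot idea concrete, the slot-5 settings above might be captured in a per-slot configuration record like the following. The field names and the helper function are hypothetical illustrations, not EEMBC’s actual format:

```python
# Sketch of how a one-second "activity slot" might be described: each slot
# enables a subset of peripherals with specific settings. The names and
# structure here are illustrative, not EEMBC's actual benchmark format.

SLOT_5 = {
    "adc": {"samples": 1, "min_sample_time_us": 20},
    "pwm": {
        "clock_hz": 32_768,      # the standard 2^15 watch-crystal rate
        "period_clocks": 255,
        "pulses": 100,
        "duty_cycle": 0.50,
        "mode": "fixed",
    },
    "spi": {"check_rx": True, "tx_bits": 128},
    "rtc": {"running": True},    # started in slot 1, checked in the last slot
}

def pwm_active_time_s(pwm):
    """How long the PWM spends emitting its pulse train within one slot."""
    period_s = pwm["period_clocks"] / pwm["clock_hz"]
    return pwm["pulses"] * period_s

# 100 pulses of 255 clocks each at 32,768 Hz is roughly 0.778 s of the 1 s slot
print(round(pwm_active_time_s(SLOT_5["pwm"]), 3))
```

One consequence of this structure is easy to see: the PWM pulse train occupies most of the slot, so the remaining peripherals’ contributions to the slot’s energy are comparatively brief bursts.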

Formal launch of this benchmark is imminent.

IoTMark-BLE

Next is an effort to characterize various radio modules, starting with Bluetooth Low Energy (BLE). This is conceptually different from ULPMark: MCUs can be intentionally designed for high performance or for low power, so the ones that don’t do well on a low-power test may excel at other applications.

Radios, by contrast, can probably never be characterized as ultra-low power, and, given a protocol and a profile, it’s always the goal to get power as low as practical. A higher-power radio with the same functionality and range would provide no obvious other benefit. So the IoTMark-BLE benchmark looks at the energy cost of BLE on different MCUs. Even though the radio is the key part here, the energy measured is the total sensor/MCU/radio energy (although it is possible to tease them apart by focusing on specific work tasks).
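What that energy measurement reduces to is integrating instantaneous power (V × I) over the benchmark run. A minimal sketch of the arithmetic, with made-up sample values standing in for what dedicated measurement hardware would capture:

```python
# Sketch of what an energy measurement computes: integrate V * I over the
# run to get total energy in joules. The trapezoidal rule and the sample
# values below are illustrative; real monitors do this in hardware/firmware.

def total_energy_j(samples, dt_s):
    """samples: list of (volts, amps) pairs taken every dt_s seconds."""
    powers = [v * i for v, i in samples]
    # Trapezoidal integration of power over time
    return sum((powers[k] + powers[k + 1]) / 2 * dt_s
               for k in range(len(powers) - 1))

# Example: 3.0 V supply, 2 mA active for 10 ms, then 2 uA sleep for 990 ms
active = [(3.0, 2e-3)] * 11          # 10 ms span at 1 ms sampling
sleep  = [(3.0, 2e-6)] * 990
e = total_energy_j(active + sleep, dt_s=1e-3)
print(f"{e * 1e6:.1f} uJ per 1 s cycle")  # -> 68.9 uJ
```

Note how the short active burst dominates the total even though the device sleeps 99% of the time – which is exactly why duty-cycled radio protocols like BLE matter for battery life.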

In order for this benchmark to make sense, we should probably pause and talk about the board on which this is done. It’s referred to as the IoTConnect Framework, and it has four elements:

  • The MCU (the DUT)
  • An energy monitor that both supplies power to the DUT and measures the energy it consumes
  • An I/O manager that connects via wire to a laptop or other server
  • A radio manager controlled by the server that communicates wirelessly with the DUT.

(Image courtesy EEMBC)

An easier-to-see photo of the actual board looks something like the following. Once a “flat” board, it has now been split into three modules: one for the I/O Manager and Energy Monitor, one for the Radio Manager, and one for the DUT itself. What you can’t see is all the wiring on the backside of the board.

(Image courtesy EEMBC)

Given this, then, the benchmark process goes as follows:

  • The DUT wakes up every 1000 ms (= 1 s) and reads 1K bytes from the I/O Manager over I2C. It performs a low-pass filter operation on the data and then sends the filtered data to the Radio Manager. At this point it goes back to sleep. Note that the data isn’t actually transmitted wirelessly; rather, it’s queued by the MCU stack for transmission when the radio wakes up, and a notification is readied.
  • The Radio Manager also awakens every 1000 ms, but not necessarily in sync with the DUT. It sends a write command to the DUT. This is queued by the RM’s BLE stack.
  • At the next BLE connection interval, the queued DUT data is sent to the RM and the RM write command is sent to the DUT.
  • The DUT then wakes up to read the data, performs a CRC, and goes back to sleep.
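The DUT’s per-wake-up work – read, filter, queue for transmission, and an integrity check on the return path – can be sketched as follows. The single-pole filter and the CRC-32 are illustrative stand-ins, not EEMBC’s actual specification:

```python
# Sketch of the DUT's work per wake-up in an IoTMark-BLE-style profile:
# read a 1 KB buffer, low-pass filter it, queue it for BLE transmission,
# and (separately) validate an incoming write command. The filter and
# CRC choices here are illustrative, not the benchmark's actual spec.
import zlib

def low_pass(data, alpha=0.25):
    """Simple single-pole IIR low-pass over a byte buffer."""
    out, y = bytearray(), 0.0
    for b in data:
        y = y + alpha * (b - y)
        out.append(int(y))
    return bytes(out)

def dut_cycle(sensor_bytes, radio_queue):
    """One wake-up: filter the sensor read and queue it for the radio."""
    radio_queue.append(low_pass(sensor_bytes))  # queued, not yet on the air

def dut_check_write(payload, expected_crc):
    """DUT-side integrity check of the Radio Manager's write command."""
    return zlib.crc32(payload) == expected_crc

queue = []
dut_cycle(bytes(range(256)) * 4, queue)  # 1 KB of ramp data
assert len(queue[0]) == 1024
```

The important structural point the sketch captures is the decoupling: the DUT’s wake-up and the actual over-the-air exchange happen on different schedules, meeting only at the BLE connection interval.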

As different MCUs perform this exercise, the Energy Monitor can report on the energy consumed in the process.

Looking Beyond

The first obvious extension of what we’ve just seen will be the move to go beyond BLE. BLE has proven popular for many apps and IoT configurations, but the following are also under consideration:

  • WiFi HaLow (802.11ah)
  • Thread (6LoWPAN over 802.15.4)
  • LoRa
  • ZigBee

Completely separately, another working group is developing a security-related benchmark called SecureMark. The idea here is to measure the execution efficiency of various cryptographic protocols across MCU platforms. This is a work in progress, targeted for launch in the Q1 2018 timeframe (although with these kinds of efforts, the future is very hard to predict, especially ahead of time).
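At its core, that kind of measurement comes down to a normalized cost per cryptographic operation – on an MCU, cycles or joules per byte processed. A rough desktop analogue of the idea, using SHA-256 purely as an illustrative primitive (the actual SecureMark protocol set is EEMBC’s to define):

```python
# Rough desktop analogue of what a SecureMark-style benchmark measures:
# normalized time per byte for a cryptographic primitive. On an MCU this
# would be cycle counts or energy per operation; SHA-256 via hashlib is
# just an illustrative stand-in for the benchmark's protocol set.
import hashlib
import time

def time_per_byte_ns(data, iters=200):
    """Average wall-clock time per input byte for SHA-256, in nanoseconds."""
    start = time.perf_counter()
    for _ in range(iters):
        hashlib.sha256(data).digest()
    elapsed = time.perf_counter() - start
    return elapsed / (iters * len(data)) * 1e9

rate = time_per_byte_ns(b"\x00" * 4096)
print(f"SHA-256: {rate:.2f} ns/byte")
```

The per-byte normalization is what makes numbers comparable across platforms with very different clock speeds and accelerator hardware.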

 

More info:

EEMBC

 
