
Ambiq Apollo4 Undercuts IoT Power

Unique Subthreshold Voltage Drops MCU into Microwatt Range

“Whoever is new to power is always harsh.” — Aeschylus

Ambiq has a new microcontroller. Of all the MCUs you could pick for your next IoT design… this is one of them. 

If that sounds like damning with faint praise, I don’t mean it that way. It’s actually a very nice part, as we will see. It’s just that the market for low-cost MCUs is very crowded. It’s hard for one vendor – much less one specific chip – to stand out. There are only so many ways you can slice, dice, and fine-tune this fiercely competitive segment. 

First, the basics. Ambiq is a smallish (135-person) fabless chip company based in Austin. It’s been around for 10 years, so it’s hardly a startup, even if it’s staffed like one. The company’s charter (do companies still do that?) is to make super-low-power MCUs for battery-powered devices. 

That’s all very nice, but it also pretty much describes a dozen other IoT-MCU wannabes. What makes Ambiq so special? The company’s USP (unique selling proposition, for salespeople) is spelled SPOT, which stands for subthreshold power optimized technology. In short, Ambiq designs circuitry that switches at very low voltages, around 0.3 V, which is well below the level that most digital circuits would consider to be “off.” Ambiq has been productizing its SPOT-based devices for years now, so the spooky technology is now well-characterized. And it makes for good MCUs. 
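To see why such a low switching voltage matters, recall that dynamic power in CMOS scales with the square of the supply voltage (P = C·V²·f). A back-of-the-envelope sketch, where the capacitance and clock values are purely illustrative (only the voltage ratio matters), compares a conventional ~1.1 V core against ~0.3 V subthreshold operation:

```python
# Dynamic CMOS switching power: P = C * V^2 * f.
# C and f below are illustrative placeholders, not Ambiq figures;
# the point is the quadratic dependence on supply voltage.
def dynamic_power(c_farads, v_volts, f_hertz):
    """Switching power of a CMOS node: P = C * V^2 * f."""
    return c_farads * v_volts**2 * f_hertz

C = 100e-12   # 100 pF of aggregate switched capacitance (illustrative)
F = 96e6      # 96 MHz clock (illustrative)

p_conventional = dynamic_power(C, 1.1, F)   # typical MCU core voltage
p_subthreshold = dynamic_power(C, 0.3, F)   # SPOT-style subthreshold voltage

print(f"Conventional (1.1 V): {p_conventional * 1e3:.2f} mW")
print(f"Subthreshold (0.3 V): {p_subthreshold * 1e3:.2f} mW")
print(f"Reduction: {p_conventional / p_subthreshold:.1f}x")
```

Dropping the supply from 1.1 V to 0.3 V cuts dynamic power by (1.1/0.3)², roughly 13×, before counting any other savings.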

The new MCU is called Apollo4 and it is, not coincidentally, the fourth generation of the company’s Apollo family. This and all previous generations are based on Arm’s Cortex-M4F CPU design. The M4F is a midrange microcontroller core with a floating-point unit, and in Apollo4 it runs at up to 196 MHz. That’s about twice as fast as Apollo3 and about four times faster than Apollo2. 

Surrounding this familiar CPU is a plethora of peripherals: UARTs, SPI, I2C, ADC, timers, a random-number generator (RNG), audio outputs, a crypto accelerator, GPIO pins, and plenty more. The CPU itself is bordered by 64 KB of instruction cache and 384 KB of either data cache or tightly coupled memory (TCM), a Cortex hallmark. It wouldn’t be a standalone MCU without on-chip memory, so Apollo4 has 2 MB of MRAM (magnetoresistive RAM) and another 1 MB of SRAM. 

What makes Apollo4 interesting (aside from its low-power technology) are its display controller and its optional Bluetooth LE interface. The display resolution stretches only to 640×480, so it’s not videogame quality, but it does support 32-bit color. Creating a convincing and customer-pleasing display is more about color depth than resolution, so Apollo4 should be more than good enough for smart refrigerators, home security systems, health monitors, and other GUI-rich devices. 
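Color depth isn’t free, though: a full frame at 32 bits per pixel takes twice the memory of one at 16. A quick sizing sketch for the 640×480 panel, assuming a plain uncompressed framebuffer (real designs may use partial refresh or packed pixel formats):

```python
# Framebuffer size for a 640x480 panel at various color depths.
def framebuffer_bytes(width, height, bits_per_pixel):
    """Bytes needed for one uncompressed full frame."""
    return width * height * bits_per_pixel // 8

for bpp in (16, 24, 32):
    size = framebuffer_bytes(640, 480, bpp)
    print(f"{bpp:2d} bpp: {size / 1024:7.1f} KB")
```

Note that at 32 bpp a single full frame (1,200 KB) is already larger than the chip’s 1 MB of SRAM, so a real design would likely lean on partial updates, lower depths, or a panel with its own frame memory.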

The BLE baseband modem and RF stage are optional and are there primarily for location finding through the spiffy angle-of-arrival/angle-of-departure (AoA/AoD) feature of BLE 5.1. You can also use it to connect to other smart devices (e.g., watches). What Apollo4 doesn’t offer is any other type of wireless interface. There’s no Wi-Fi, Thread, Zigbee, or any of the other interfaces that seem like obvious candidates for an IoT-focused MCU. Ambiq says the choice of BLE or nothing lets designers add whatever interface they want with off-chip logic; integrating Wi-Fi and the rest on-chip would add cost and almost guarantee that some interfaces would go unused. A small company like Ambiq can’t afford to maintain too many different SKUs. 
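AoA direction finding itself is straightforward trigonometry: an antenna array measures the phase difference of the incoming carrier between two antennas, and the arrival angle falls out of the path-length difference. A generic sketch of the textbook relation (this is the standard AoA math, not Ambiq-specific code):

```python
import math

def angle_of_arrival(phase_diff_rad, antenna_spacing_m, wavelength_m):
    """Textbook AoA: sin(theta) = (lambda * dphi) / (2 * pi * d)."""
    s = (wavelength_m * phase_diff_rad) / (2 * math.pi * antenna_spacing_m)
    return math.degrees(math.asin(s))

# BLE operates around 2.4 GHz, giving a wavelength of about 12.5 cm.
wavelength = 3e8 / 2.4e9       # ~0.125 m
spacing = wavelength / 2       # common half-wavelength antenna spacing

theta = angle_of_arrival(math.pi / 4, spacing, wavelength)
print(f"Arrival angle: {theta:.1f} degrees")
```

With half-wavelength spacing, a measured phase difference of π/4 radians corresponds to a signal arriving about 14.5 degrees off the array’s boresight.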

So, how low-power is this new low-power device? That’s tough to measure, since it depends on so many variables. Ambiq says that Apollo4 consumes just 3 µA per MHz (about 10 µW/MHz at 3.3 V), which works out to roughly 2 mW at top speed. Sounds good. Ambiq’s previous-generation Apollo3 consumed twice that much power, and the generation before that (Apollo2) ate up three times as much. So, even by Ambiq’s standards, Apollo4 is an extremely low-power device. 
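Those figures translate directly into battery life. A rough sketch, using the article’s 3 µA/MHz active figure and a CR2032 coin cell (~225 mAh); the sleep current and duty cycle here are illustrative assumptions, not quoted Ambiq specs:

```python
# Estimate runtime on a coin cell from an average current draw.
BATTERY_MAH = 225.0       # typical CR2032 capacity
ACTIVE_UA_PER_MHZ = 3.0   # Ambiq's quoted Apollo4 active figure
CLOCK_MHZ = 196.0         # top speed
SLEEP_UA = 2.0            # illustrative deep-sleep current (assumption)
DUTY_CYCLE = 0.01         # awake 1% of the time (assumption)

active_ua = ACTIVE_UA_PER_MHZ * CLOCK_MHZ                       # ~588 uA
avg_ua = DUTY_CYCLE * active_ua + (1 - DUTY_CYCLE) * SLEEP_UA   # weighted avg
hours = BATTERY_MAH * 1000.0 / avg_ua

print(f"Average draw: {avg_ua:.1f} uA")
print(f"Runtime: {hours / 24:.0f} days (~{hours / 8766:.1f} years)")
```

Under these assumed conditions the average draw lands under 8 µA, which stretches a single coin cell to a few years, the kind of arithmetic that makes subthreshold operation attractive for wearables.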

Apollo4 is sampling now, so it’ll be a while before we see it in everyday devices. You’ll know how to find them. They’ll be the ones with the longest battery life. 
