
An AI Storm is Coming as Analog AI Surfaces in Sensors

I worry that when writing these columns, I sometimes start by meandering my way off into the weeds, cogitating and ruminating on “this and that” before eventually bringing the story back home. So, on the basis that “a change is as good as a rest,” as the old English proverb goes, let’s do things a little differently this time.

Take a look at the image below. What do you see in addition to the penny piece? What I see is a Mantis AI-in-Sensor (AIS) System-on-Chip (SoC), where the “AI” portion of this moniker stands for “artificial intelligence.” This little beauty is brought to us by those clever little scamps at AIStorm.ai, who simply cannot restrain themselves from dropping the phrase “a storm is coming” into every conversation.

Mantis AIS SoC next to an American penny piece (Image source: AIStorm)

Of course, we’ve all seen chips before, so what makes this one different? Well, what you are looking at here is essentially a self-contained “AI Smart Camera,” because all that’s needed externally is a lens and a capacitor. Using an analog AI implementation, this little beauty draws only 15 µW of power in “always-on” operation. Furthermore, it’s so fast that it can finish its processing before its digital AI competitors have gathered enough data to even think about commencing theirs, but mayhap we are getting ahead of ourselves…

Let’s start with the problem: CMOS imaging sensors and MEMS audio sensors generate analog data, but conventional AI systems process that data digitally. In a traditional image-processing implementation, for example, the pixel array is connected to a source follower (that is, a field-effect transistor (FET)-based common-drain amplifier), which drives an analog-to-digital converter (ADC), which in turn feeds an image signal processor (ISP). The output from the ISP is then passed over a digital communications channel, such as a MIPI SerDes link, to a digital AI, which analyzes the image or video stream in the digital domain using a microcontroller unit (MCU), graphics processing unit (GPU), digital signal processor (DSP), or field-programmable gate array (FPGA), and generates appropriate events. Because the system must digitize the data before feeding these discrete processing engines, the result is higher latency, higher power consumption, and higher cost.

The problem (unintegrated digital AI) versus the solution (integrated analog AI) (Image source: AIStorm)
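
To make this conventional chain a little more concrete, here’s a minimal Python sketch that models each stage in the diagram above as a function. Every implementation is an idealized stand-in of my own devising rather than anyone’s actual pipeline code; the point is simply that the AI sits at the end of a long digitize-and-transport chain and can’t start work until every upstream stage has finished.

    import numpy as np

    def source_follower(pixel_charges):
        # Buffer the pixel charge as an analog voltage (idealized unity gain).
        return pixel_charges * 1.0

    def adc(analog_voltages, bits=10):
        # Quantize the buffered analog signal into digital codes.
        levels = 2 ** bits
        return np.clip((analog_voltages * levels).astype(int), 0, levels - 1)

    def isp(digital_frame):
        # Stand-in image signal processor: normalize codes to [0, 1].
        return digital_frame / digital_frame.max()

    def mipi_link(frame):
        # Stand-in for the SerDes transport; in hardware this adds latency.
        return frame.copy()

    def digital_ai(frame):
        # Stand-in classifier running on an MCU/GPU/DSP/FPGA.
        return "event" if frame.mean() > 0.5 else "no event"

    frame = np.random.rand(96, 96)  # raw pixel charge, normalized for the model
    result = digital_ai(mipi_link(isp(adc(source_follower(frame)))))

Each function boundary here corresponds to a real block (and often a separate chip) in the diagram, and each one is where the latency, power, and cost accumulate.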

By comparison, AIStorm’s solution transforms the sensor into the input layer of an analog charge-domain AI. The AIS device accepts the sensor data directly, without digitizing it; couples the sensor charge straight into the first layer of analog neurons; uses multiple layers of analog neurons to perform weight multiplication, summing, and biasing; and ultimately produces a decision output. Because the AIS feeds analog data from the sensor directly into the integrated analog AI, the result is lower latency, lower power consumption, and lower cost. In turn, this means significantly higher performance and longer life for battery-powered products.
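
As a rough mental model of what each of those analog neurons computes, here’s the familiar weighted-sum-plus-bias arithmetic written out in Python. In Mantis, the multiplies, sums, and biases happen in the charge domain and the results travel between neurons as pulses; the code below is only the digital equivalent of that math, and the layer sizes and ReLU activation are my own illustrative choices rather than a description of the Mantis internals.

    import numpy as np

    def neuron_layer(input_charges, weight_matrix, biases):
        # Each row of weight_matrix is one neuron: weight each input charge,
        # sum the results, and add a bias. The ReLU nonlinearity is an
        # illustrative choice, not a claim about the Mantis circuits.
        return np.maximum(0.0, weight_matrix @ input_charges + biases)

    x = np.random.rand(16)                        # charges coupled in from the sensor
    w1, b1 = np.random.randn(8, 16), np.zeros(8)  # first layer of neurons
    w2, b2 = np.random.randn(2, 8), np.zeros(2)   # second layer of neurons
    decision = int(neuron_layer(neuron_layer(x, w1, b1), w2, b2).argmax())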

Although AIStorm’s Mantis data flow employs charge-domain processing and uses pulses to communicate between its analog neurons, it’s still based on a standard TensorFlow development methodology, including a bridge that allows the artificial neural network (ANN) generated by TensorFlow to be downloaded into the Mantis SoCs.

Mantis development flow (Image source: AIStorm)
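
For anyone who hasn’t used TensorFlow, the PC-side portion of this flow looks like perfectly ordinary model development. The sketch below builds and compiles a small Keras CNN sized for a 96 x 96 monochrome input; the layer choices and two-class output are placeholders of my own invention, and since the bridge that translates trained weights for the analog neurons is AIStorm’s tooling, only the standard TensorFlow side is shown.

    import tensorflow as tf

    # An ordinary Keras model sized for the sensor's 96 x 96 input; the
    # layer choices and two-class output are illustrative placeholders.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(96, 96, 1)),
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(2, activation="softmax"),  # e.g., "person" / "no person"
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    # model.fit(train_images, train_labels, epochs=10)  # data captured from the sensor

    # The trained weights are what the bridge translates into charge-domain
    # form for download into Mantis; get_weights() is the standard Keras hook.
    weights = model.get_weights()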

In addition to being able to accept analog data directly from its image sensor or audio sensor, Mantis can also accept data via digital interfaces such as SPI, I2S, or PDM. Furthermore, sensor data can be output over the same digital interfaces for use in creating training datasets. The training itself is performed on a PC, after which the resulting weights and execution information are loaded into Mantis, which then handles inference execution.
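
As an example of how dataset capture over one of those digital interfaces might look from a Linux host, here’s a hedged sketch using the Python spidev library. The command byte, the row-by-row framing, and the one-byte-per-pixel format are all hypothetical placeholders for illustration; the real transaction format would come from AIStorm’s documentation.

    import numpy as np
    import spidev

    READ_ROW_CMD = 0x00  # hypothetical "read next row" command byte

    spi = spidev.SpiDev()
    spi.open(0, 0)                 # SPI bus 0, chip select 0
    spi.max_speed_hz = 1_000_000

    def capture_frame():
        # Read one 96-pixel row per transaction (a made-up framing that
        # conveniently stays under spidev's default 4 KB transfer limit).
        rows = [spi.xfer2([READ_ROW_CMD] + [0x00] * 96)[1:] for _ in range(96)]
        return np.array(rows, dtype=np.uint8)

    # Grab a batch of frames to label offline and train on the PC.
    dataset = np.stack([capture_frame() for _ in range(100)])
    np.save("mantis_frames.npy", dataset)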

Now, it has to be acknowledged that we aren’t talking about high-definition image processing here. Mantis currently supports a resolution of 96 x 96 pixels, but that’s more than enough to address a tremendous range of markets.

A selection of potential target Mantis markets (Image source: AIStorm)

Out of all these potential markets, the one that immediately caught my eye was “Occupancy” in the “Consumer IoT” column. Over the years, I have grown to hate occupancy systems that are based on passive infrared (PIR) detectors. On the one hand, it’s nice that the lights turn on automatically when you enter a room without your having to do anything, especially if your arms are full of books and papers and suchlike. On the other hand, it’s a pain in the nether regions when the lights go out while you are in the middle of reading or writing something, forcing you to leap to your feet and start gesticulating furiously (it’s also embarrassing if someone enters the room just after the lights have come back on to find you jumping up and down waving your arms around while casting PIR-centric aspersions… or so I’ve been told).

Sensors are a system’s eyes, ears, nose, and fingers. Furthermore, sensors can focus on what’s important — a face, a sound, an intruder, a change — while ignoring anything outside their purview. You might say that AIStorm’s AIS technology is just a clever mix of analog and mixed-signal technology, but it’s much more than that. For the first time, a teeny-tiny sensor is smart enough to perform complex analysis, make decisions, and deal with events itself, often before its digital competitors have even been able to start processing.

AIS SoCs are the first and only sensor solutions capable of accepting pixel-charge data or audio-MEMS-charge data directly in their native charge form. The result is the world’s only family of solutions capable of image- or audio-based smart AI wakeup on a person, face, object, behavior, sound, or word.

Mantis AIS devices include convolutional neural network (CNN) and fully connected (FC) capabilities with the flexibility to implement a variety of popular machine learning (ML) models. The first member of the Mantis family, the C100A, is a fully integrated AI-based Smart Camera supporting up to eight layers of programmable deep learning capability while drawing only 15 µW of power in “always-on” operation. This AI system is a powerful but lightweight CNN that performs image analysis and wakes the system on detecting an object, person, or behavior using the supplied MantisNet models.
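
To picture what that wakeup looks like at the system level, here’s a sketch of the host side on a Raspberry Pi-class board using the RPi.GPIO library. The heavy digital system sleeps while the analog AI watches; my choice of GPIO 17 as the wake line, and indeed the whole wiring, is purely an assumption for illustration.

    import time
    import RPi.GPIO as GPIO

    WAKE_PIN = 17  # hypothetical GPIO wired to the AIS wake output

    def on_wake(channel):
        # Only now does the power-hungry digital side spin up; the analog
        # AI has already decided something of interest is present.
        print("AIS wake event: start host processing")

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(WAKE_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
    GPIO.add_event_detect(WAKE_PIN, GPIO.RISING, callback=on_wake)

    try:
        while True:
            time.sleep(1)  # the host idles; the sensor watches at 15 µW
    except KeyboardInterrupt:
        GPIO.cleanup()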

The great thing here is that there’s room for everyone at the AI party. The folks at AIStorm aren’t trying to replace “traditional” high-end AI vision and speech applications, such as surveillance systems that can pick persons of interest out of a crowd, or assistants like Amazon Alexa that detect and respond to complex commands and queries (I put “traditional” in quotes because commercially deployed AI is still so new that it seems strange to refer to it as traditional). Rather, they are taking AI where it’s never been possible to deploy it before (at least not in a cost-effective fashion): to the extreme edge of the internet, in the sensors themselves.

I for one am very interested to see where this technology takes us. How about you? Do you have any thoughts you’d care to share?
