
An AI Storm is Coming as Analog AI Surfaces in Sensors

I worry that when writing these columns, I sometimes start by meandering my way off into the weeds, cogitating and ruminating on “this and that” before eventually bringing the story back home. So, on the basis that “a change is as good as a rest,” as the old English proverb goes, let’s do things a little differently this time.

Take a look at the image below. What do you see in addition to the penny piece? What I see is a Mantis AI-in-Sensor (AIS) System-on-Chip (SoC), where the “AI” portion of this moniker stands for “artificial intelligence.” This little beauty is brought to us by those clever little scamps at AIStorm.ai, who simply cannot restrain themselves from dropping the phrase “a storm is coming” into every conversation.

Mantis AIS SoC next to an American penny piece (Image source: AIStorm)

Of course, we’ve all seen chips before, so what makes this one different? Well, what you are looking at here is essentially a self-contained “AI Smart Camera” because all that is needed externally is a lens and a capacitor. Using an analog AI implementation, this little beauty draws only 15 µW of power in “always-on” operation. Furthermore, it’s so fast that it can have finished its processing before its digital AI competitors have gathered enough data to even think about commencing their processing, but mayhap we are getting ahead of ourselves…

Let’s start with the problem: CMOS imaging sensors and MEMS audio sensors generate analog data, but conventional AI systems can process that data only after it has been digitized. For example, in a traditional image-processing implementation, the pixel array is connected to a source follower (that is, a field-effect transistor (FET)-based common-drain amplifier), which drives an analog-to-digital converter (ADC), which feeds an image signal processor (ISP). The output from the ISP is then passed over a digital communications channel, such as a MIPI SerDes link, to a digital AI, which analyzes the image or video stream in the digital domain using a microcontroller unit (MCU), graphics processing unit (GPU), digital signal processor (DSP), or field-programmable gate array (FPGA), and generates appropriate events. The fact that the system must digitize the data to feed these discrete processing engines results in higher latency, higher power consumption, and higher cost.

The problem (unintegrated digital AI) versus the solution (integrated analog AI) (Image source: AIStorm)
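
To make that conventional chain concrete, here's a minimal Python sketch that models the digitize-then-process flow as a series of discrete stages. Everything here is schematic (the stage behaviors are stand-ins, not AIStorm's or anyone else's real figures); the point is simply that the digital AI at the end of the chain cannot start work until every upstream stage has finished.

```python
# Schematic model of the conventional digitize-then-process chain; each
# stage is a stand-in, following the flow described in the article.

def source_follower(pixel_charge):
    """FET common-drain amplifier: buffers the pixel charge as a voltage."""
    return pixel_charge  # treated as a unity-gain buffer here

def adc(voltage, bits=10):
    """Quantize the buffered voltage; nothing downstream can start sooner."""
    return round(voltage * (2 ** bits - 1))

def isp(sample):
    """Image signal processor (denoise, scale, etc.; identity here)."""
    return sample

def mipi_link(frame):
    """Serialize the processed frame out to the external AI engine."""
    return list(frame)

def digital_ai(frame):
    """Only now can the MCU/GPU/DSP/FPGA begin inference."""
    return max(frame)  # stand-in for a real classifier

# Every pixel of every frame pays the full ADC -> ISP -> link toll before
# inference begins; Mantis's analog approach skips these stages entirely.
pixels = [0.12, 0.87, 0.45]  # normalized pixel charges (invented values)
frame = [isp(adc(source_follower(p))) for p in pixels]
decision = digital_ai(mipi_link(frame))
```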

By comparison, AIStorm’s solution transforms the sensor into the input layer of an analog charge-domain AI. The AIS device accepts the sensor data directly without digitizing it, couples the sensor charge directly into the first layer of analog neurons, uses multiple layers of analog neurons to perform tasks like weight multiplication, summing, and biasing, and ultimately produces a decision output. Because the AIS feeds analog data from the sensor directly into the integrated analog AI, the result is lower latency, lower power consumption, and lower cost, which in turn translates into a significant increase in performance and longer life for battery-powered products.
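
Functionally, each of those analog layers computes the same multiply-sum-and-bias operation as a layer in a conventional digital neural network; Mantis simply performs the arithmetic with charge rather than with digitized numbers. Here's a minimal NumPy sketch of what one such layer computes (the input values and weights are invented for illustration):

```python
import numpy as np

def analog_layer(inputs, weights, biases):
    """One neuron layer: weight multiplication, summing, and biasing,
    then a nonlinearity. Mantis performs this arithmetic in the charge
    domain; here it's ordinary floating point for illustration."""
    z = weights @ inputs + biases      # multiply and sum, then add bias
    return np.maximum(z, 0.0)          # ReLU as a stand-in neuron response

# Toy example: four "pixel charges" feeding a layer of three neurons.
x = np.array([0.2, 0.9, 0.4, 0.1])                # sensor charges (invented)
W = np.random.default_rng(0).normal(size=(3, 4))  # stand-in trained weights
b = np.zeros(3)
print(analog_layer(x, W, b))
```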

Although AIStorm’s Mantis data flow employs charge domain processing and uses pulses to communicate between its analog neurons, it’s still based on a standard TensorFlow development methodology, including a bridge that allows the artificial neural network (ANN) generated by TensorFlow to be downloaded into the Mantis SoCs.

Mantis development flow (Image source: AIStorm)
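
By way of illustration, here's the sort of tiny TensorFlow/Keras model you might train on a PC for Mantis's 96 x 96 image sensor. To be clear, the layer sizes below are my own guesses rather than an actual MantisNet model, and the bridge step that converts and downloads the trained network into the SoC is AIStorm's own tooling, so it appears only as a comment:

```python
import tensorflow as tf

# A small CNN sized for Mantis's 96 x 96 sensor resolution. The layer
# sizes are illustrative guesses, not an actual MantisNet model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 96, 1)),           # grayscale frames
    tf.keras.layers.Conv2D(8, 3, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g., person / no person
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Train on the PC with frames captured from the sensor (see the capture
# sketch below), then save the trained network:
# model.fit(train_frames, train_labels, epochs=10)
model.save("wakeup_model.keras")

# AIStorm's bridge (proprietary; its invocation isn't public) would then
# convert these trained weights for download into the Mantis SoC.
```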

In addition to being able to accept analog data directly from its image sensor or audio sensor, Mantis can also accept data via digital interfaces such as SPI, I2S, or PDM. Furthermore, sensor data can be output using the same digital interfaces for use in creating training datasets. The training itself is performed on a PC. The resulting weights and execution information are loaded into Mantis, which provides inference execution.  
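
As a sketch of how that dataset-creation path might look from a host PC or single-board computer, consider the hypothetical Python below. The spidev calls are the standard Linux ones, but the one-byte-per-pixel framing and the bus/chip-select numbering are invented for illustration; the real transaction format lives in the Mantis documentation.

```python
import numpy as np
import spidev  # standard Linux userspace SPI binding

WIDTH = HEIGHT = 96  # Mantis sensor resolution

spi = spidev.SpiDev()
spi.open(0, 0)                 # bus 0, chip-select 0: board-specific
spi.max_speed_hz = 1_000_000

def capture_frame():
    """Read one 96 x 96 frame over SPI for the training dataset.
    The one-byte-per-pixel, row-at-a-time framing is hypothetical;
    consult the Mantis documentation for the real protocol."""
    rows = [spi.readbytes(WIDTH) for _ in range(HEIGHT)]
    return np.array(rows, dtype=np.uint8)

# Collect labeled frames on the PC, then train as in the Keras sketch above.
frames = np.stack([capture_frame() for _ in range(100)])
np.save("train_frames.npy", frames)
```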

Now, it has to be acknowledged that we aren’t talking about high-definition image processing here. Mantis currently supports a resolution of 96 x 96 pixels, but that’s more than enough to address a tremendous range of markets.

A selection of potential target Mantis markets (Image source: AIStorm)

Out of all these potential markets, the one that immediately caught my eye was “Occupancy” in the “Consumer IoT” column. Over the years, I have grown to hate occupancy systems that are based on passive infrared (PIR) detectors. On the one hand, it’s nice that the lights turn on automatically when you enter a room without your having to do anything, especially if your arms are full of books and papers and suchlike. On the other hand, it’s a pain in the nether regions when the lights go out while you are in the middle of reading or writing something, forcing you to leap to your feet and start gesticulating furiously (it’s also embarrassing if someone enters the room just after the lights have activated to find you jumping up and down waving your arms around while casting PIR-centric aspersions… or so I’ve been told).

Sensors are a system’s eyes, ears, nose, and fingers. Furthermore, sensors can focus on what’s important (a face, a sound, an intruder, a change) while ignoring anything outside their purview. You might say that AIStorm’s AIS technology is just a clever mix of analog and mixed-signal techniques, but it’s much more than that. For the first time, a teeny-tiny sensor is smart enough to perform complex analysis, make decisions, and deal with events itself, often before its digital competitors have even been able to start processing.

AIS SoCs are the first and only sensor solutions capable of accepting pixel-charge or audio-MEMS-charge data directly in its native charge form. The result is the world’s only family of solutions capable of image- or audio-based smart AI wakeup on a person, face, object, behavior, sound, or word.

Mantis AIS devices include convolutional neural network (CNN) and fully connected (FC) capabilities with the flexibility to implement a variety of popular machine learning (ML) models. The first member of the Mantis family, the C100A, is a fully integrated AI-based Smart Camera supporting up to eight layers of programmable deep learning capability while drawing only 15 µW of power in “always-on” operation. This AI system is a powerful but lightweight CNN that’s capable of performing image analysis and waking up on detecting an object, person, or behavior using supplied MantisNet Models.
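
From the host system’s point of view, that wake-up behavior reduces to a very simple contract: Mantis watches continuously at micro-watt power and only disturbs the host when a detection score crosses a threshold. The following host-side sketch is hypothetical (the read_detection_score() helper and the threshold value are invented; only the overall wake-on-detect pattern comes from the article):

```python
import time

WAKE_THRESHOLD = 0.8  # hypothetical confidence for declaring a detection

def read_detection_score():
    """Hypothetical stand-in for reading Mantis's decision output
    (e.g., over SPI); a canned value is returned so the sketch runs."""
    return 0.93

def host_sleep_loop():
    """Host-side pattern: the heavy digital processor stays idle while
    Mantis watches at micro-watt power, waking only on a real detection."""
    while True:
        if read_detection_score() >= WAKE_THRESHOLD:
            print("Wake: person/object detected; start full processing")
            break
        time.sleep(0.1)  # on real hardware: deep sleep until an interrupt fires

host_sleep_loop()
```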

The great thing here is that there’s room for everyone at the AI party. The folks at AIStorm aren’t trying to replace “traditional” high-end AI vision and speech applications, such as surveillance systems that can pick persons of interest out of a crowd, or voice assistants like Amazon Alexa that can detect and respond to complex commands and queries (I put “traditional” in quotes because commercially deployed AI is still so new that it seems strange to refer to it as traditional). Rather, they are taking AI where it has never been possible to deploy it before (at least not in a cost-effective fashion): to the extreme edge of the internet, in the sensors themselves.

I for one am very interested to see where this technology takes us. How about you? Do you have any thoughts you’d care to share?
