
An AI Storm is Coming as Analog AI Surfaces in Sensors

I worry that when writing these columns, I sometimes start by meandering my way off into the weeds, cogitating and ruminating on “this and that” before eventually bringing the story back home. So, on the basis that “a change is as good as a rest,” as the old English proverb goes, let’s do things a little differently this time.

Take a look at the image below. What do you see in addition to the penny piece? What I see is a Mantis AI-in-Sensor (AIS) System-on-Chip (SoC), where the “AI” portion of this moniker stands for “artificial intelligence.” This little beauty is brought to us from those clever little scamps at AIStorm.ai who simply cannot restrain themselves from dropping the phrase “a storm is coming” into every conversation.

Mantis AIS SoC next to an American penny piece (Image source: AIStorm)

Of course, we’ve all seen chips before, so what makes this one different? Well, what you are looking at here is essentially a self-contained “AI Smart Camera” because all that is needed externally is a lens and a capacitor. Using an analog AI implementation, this little beauty draws only 15 µW of power in “always-on” operation. Furthermore, it’s so fast that it can have finished its processing before its digital AI competitors have gathered enough data to even think about commencing their processing, but mayhap we are getting ahead of ourselves…

Let’s start with the problem: CMOS imaging sensors and MEMS audio sensors generate analog data, but conventional AI systems process that data digitally. For example, in a traditional image-processing implementation, the pixel array is connected to a source follower (that is, a field-effect transistor (FET)-based common-drain amplifier), which drives an analog-to-digital converter (ADC), which feeds an image signal processor (ISP). The output from the ISP is then passed over a digital communications channel, such as a MIPI SerDes link, to a digital AI. This AI analyzes the image or video stream in the digital domain — using a microcontroller unit (MCU), graphics processing unit (GPU), digital signal processor (DSP), or field-programmable gate array (FPGA) — and generates appropriate events. The fact that the system must digitize the data to feed these discrete processing engines results in higher latency, higher power consumption, and higher cost.
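
For those who, like me, sometimes think better in code, below is a toy Python model of this conventional digital chain. To be clear, this is purely conceptual (every stage is a placeholder function with made-up behavior); its only purpose is to show how many discrete hops the pixel data must make before any AI processing can even begin.

```python
import numpy as np

# Toy model of the conventional digital imaging-AI pipeline described above.
# Each stage is a stand-in function; the behaviors and numbers are invented.

def source_follower(pixel_charge):
    """Buffer the pixel charges as analog voltages (hypothetical gain)."""
    return 0.95 * pixel_charge

def adc(analog, bits=10):
    """Quantize the analog signal (this is the step the analog approach skips)."""
    levels = 2 ** bits
    return np.clip(np.round(analog * (levels - 1)), 0, levels - 1).astype(int)

def isp(digital):
    """Stand-in image signal processor (here, just normalization)."""
    return digital / digital.max()

def mipi_link(frame):
    """Stand-in for serializing the frame out to the downstream digital AI."""
    return frame.copy()

def digital_ai(frame):
    """Stand-in inference engine (an MCU/GPU/DSP/FPGA in a real system)."""
    return "event" if frame.mean() > 0.5 else "no event"

pixels = np.random.rand(96, 96)  # raw analog pixel charges
print(digital_ai(mipi_link(isp(adc(source_follower(pixels))))))
```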

The problem (unintegrated digital AI) versus the solution (integrated analog AI) (Image source: AIStorm)

By comparison, AIStorm’s solution transforms the sensor into the input layer of an analog charge domain AI. The AIS device accepts the sensor data directly without digitizing it, couples the sensor charge directly into the first layer of analog neurons, uses multiple layers of analog neurons to perform tasks like weight multiplication, summing, and biasing, and — ultimately — produces a decision output. The fact that the AIS feeds analog data from the sensor directly into the integrated analog AI results in lower latency, lower power consumption, and lower cost. In turn, this results in a significant increase in performance and longer life for battery-powered products.
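
To make the “weight multiplication, summing, and biasing” part concrete, here’s a minimal numerical sketch of what a single charge-domain analog neuron computes. Note that this is my own toy model rather than AIStorm’s circuitry; in silicon, these operations are performed on charge packets and pulses, not floating-point numbers.

```python
import numpy as np

# Toy model of a charge-domain analog neuron: the sensor's pixel charges
# are weighted, summed, and biased directly, with no ADC anywhere in the
# path. Conceptual only; the weights and activation are invented.

def analog_neuron(input_charges, weights, bias):
    # Weight multiplication and summing are what the charge-domain
    # circuitry implements physically; here we just model the math.
    return np.dot(weights, input_charges) + bias

def analog_layer(input_charges, weight_matrix, biases):
    # A layer is many such neurons sharing the same inputs, followed by
    # a simple rectifying activation.
    return np.maximum(weight_matrix @ input_charges + biases, 0.0)

charges = np.random.rand(16)        # charges from a 16-pixel patch
w = np.random.randn(8, 16) * 0.1    # hypothetical trained weights
b = np.zeros(8)

print(analog_neuron(charges, w[0], b[0]))  # one neuron's output
print(analog_layer(charges, w, b))         # a whole layer's outputs
```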

Although AIStorm’s Mantis data flow employs charge domain processing and uses pulses to communicate between its analog neurons, it’s still based on a standard TensorFlow development methodology, including a bridge that allows the artificial neural network (ANN) generated by TensorFlow to be downloaded into the Mantis SoCs.

Mantis development flow (Image source: AIStorm)

In addition to being able to accept analog data directly from its image sensor or audio sensor, Mantis can also accept data via digital interfaces such as SPI, I2S, or PDM. Furthermore, sensor data can be output using the same digital interfaces for use in creating training datasets. The training itself is performed on a PC. The resulting weights and execution information are loaded into Mantis, which provides inference execution.  
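
As a rough illustration of the PC-side portion of this flow, the sketch below defines and trains a small TensorFlow model on 96 x 96 frames. I must stress that the architecture is my own guess at a Mantis-class network, and the final weight-extraction step is just a placeholder; AIStorm’s actual MantisNet models and bridge tooling aren’t public, so none of these specifics should be taken as gospel.

```python
import tensorflow as tf

# Hypothetical Mantis-class network: 96 x 96 single-channel input, a few
# convolutional layers plus fully connected layers, a yes/no decision out.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # person / no person
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Train on frames previously captured from the sensor's digital interface:
# x_train would be N x 96 x 96 x 1 arrays, y_train the 0/1 labels.
# model.fit(x_train, y_train, epochs=10)

# Placeholder for the bridge step: the trained weights would be extracted
# and converted into whatever format the Mantis device expects.
weights = model.get_weights()
```

In a real flow, the frames captured over SPI, I2S, or PDM would form the training dataset, and AIStorm’s bridge would handle converting the trained network for download into the device.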

Now, it has to be acknowledged that we aren’t talking about high-definition image processing here. Mantis currently supports a resolution of 96 x 96 pixels, but that’s more than enough for a tremendous range of markets.

A selection of potential target Mantis markets (Image source: AIStorm)

Out of all these potential markets, the one that immediately caught my eye was “Occupancy” in the “Consumer IoT” column. Over the years, I have grown to hate occupancy systems that are based on passive infrared (PIR) detectors. On the one hand, it’s nice that the lights turn on automatically when you enter a room without your having to do anything, especially if your arms are full of books and papers and suchlike. On the other hand, it’s a pain in the nether regions when the lights go out while you are in the middle of reading or writing something, forcing you to leap to your feet and start gesticulating furiously (it’s also embarrassing if someone enters the room just after the lights have activated to find you jumping up and down waving your arms around while casting PIR-centric aspersions… or so I’ve been told).

Sensors are a system’s eyes, ears, nose, and fingers. Furthermore, sensors can focus on what’s important — a face, a sound, an intruder, a change — while ignoring anything outside their purview. You might say that AIStorm’s AIS technology is just a clever mix of analog and mixed-signal technology, but it’s much more than that. For the first time, a teeny-tiny sensor is smart enough to perform complex analysis, make decisions, and deal with events itself, often before its digital competitors have even been able to start processing.

AIS SoCs are the first and only sensor solutions capable of accepting pixel-charge data or audio-MEMS-charge data directly in their native charge form. The result is the world’s only family of solutions capable of image- or audio-based smart AI wakeup on a person, face, object, behavior, sound, or word.

Mantis AIS devices include convolutional neural network (CNN) and fully connected (FC) capabilities with the flexibility to implement a variety of popular machine learning (ML) models. The first member of the Mantis family, the C100A, is a fully integrated AI-based Smart Camera supporting up to eight layers of programmable deep learning capability while drawing only 15 µW of power in “always-on” operation. This AI system is a powerful but lightweight CNN that’s capable of performing image analysis and waking up on detecting an object, person, or behavior using supplied MantisNet Models.
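
To picture how a host system might exploit this, consider the hypothetical wake-on-detection loop below. The read_decision() function and the confidence threshold are inventions of mine for illustration only; a real design would receive the device’s decision output via an interrupt line or one of the digital interfaces mentioned earlier.

```python
import random
import time

# Hypothetical sketch of the "wake on detection" pattern: the host idles
# while the always-on device runs inference, waking only when the decision
# output crosses a threshold. Everything here is a stand-in.

WAKE_THRESHOLD = 0.8  # assumed confidence threshold, not a device spec

def read_decision():
    """Stand-in for reading the device's decision output (faked here)."""
    return random.random()

def host_loop():
    while True:
        confidence = read_decision()
        if confidence >= WAKE_THRESHOLD:
            print(f"Wake event: detection reported ({confidence:.2f})")
            break
        time.sleep(0.1)  # host idles; the 15 uW device keeps watching

host_loop()
```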

The great thing here is that there’s room for everyone at the AI party. The folks at AIStorm aren’t trying to replace “traditional” high-end AI vision and speech applications, such as surveillance systems that can identify persons of interest in a crowd, or assistants like Amazon’s Alexa that can detect and respond to complex commands and queries (I put “traditional” in quotes because commercially deployed AI is still so new that it seems strange to refer to it as such). Rather, they are taking AI where it’s never been possible to deploy it before — at least not in a cost-effective fashion — to the extreme edge of the internet, in the sensors themselves.

I for one am very interested to see where this technology takes us. How about you? Do you have any thoughts you’d care to share?
