feature article

Creating Tiny AI/ML-Equipped Systems to Run at the Extreme Edge

One of my favorite science fiction authors is/was Isaac Asimov (should we use the past tense since he is no longer with us, or the present tense because we still enjoy his writings?). In many ways Asimov was a futurist, but — like all who attempt to foretell what is to come — he occasionally managed to miss the mark.

Take his classic Foundation Trilogy, for example (before he added the two prequels and two sequels). On the one hand we have a Galactic Empire that spans the Milky Way with millions of inhabited worlds and quadrillions of people. Also, we have mighty space vessels equipped with hyperdrives that can convey people from one side of the galaxy to the other while they are still young enough to enjoy the experience.

On the other hand, in Foundation and Empire, when a message arrives at a spaceship via hyperwave for the attention of General Bel Riose, it’s transcribed onto a metal spool that’s placed in a message capsule that will open only to his thumbprint. Asimov simply never conceived of things like today’s wireless networks and tablet computers and suchlike.

As an aside, I was completely blown away when I heard that this classic tale will soon be gracing our television screens (see OMG! Asimov’s Foundation is Coming to TV!). I just took another look at the Official Teaser Trailer and I, for one, cannot wait!


Something else for which Asimov is famed is his Three Laws of Robotics, which were introduced in his 1942 short story Runaround. Don’t worry, I’m not going to recite them here — if you are reading this column, it’s close to a 99.9999% certainty that you, like me, can recite them by heart. Asimov used these laws to great effect in his various robot stories, along with the concept of the robots having “positronic brains” (when he wrote his first robot stories in 1939 and 1940, the positron was a newly discovered particle, so the buzzword “positronic” added a soupçon of science to the proceedings). Although there weren’t too many nitty-gritty details provided, there was talk about “pathways” being formed in the brains, resulting in what we might today recognize as being a super-sophisticated form of analog artificial neural network (AANN) (a supersized version of Aspinity’s Awesome AANNs, if you will, and don’t get me started talking about Analog AI Surfacing in Sensors).

The 1956 Dartmouth workshop is now widely considered to be the founding event of artificial intelligence as a field, so Asimov would probably not have been aware of many artificial intelligence (AI) and machine learning (ML) concepts before then. Now I come to think about it, however, I find it telling that the only time what we would now refer to as “artificial intelligence” ever raised its head in Asimov’s writings was in the form of his robots and their positronic brains. As we now know, AI and ML can appear all over the place, from small microcontrollers at the extreme edge of the internet to humongous server farms, a.k.a. the cloud. I would love to be able to get my time machine working and bring Asimov to the present day to show him all the stuff we have now, like Wi-Fi and smartphones and virtual reality (VR) and suchlike. I would also love to introduce him to the current state of play regarding AI and ML.

As I’ve mentioned before (and as I’ll doubtless mention again), following the Dartmouth workshop, AI/ML was largely an academic pursuit until circa 2015 when it exploded onto the scene. The past few years have seen amazing growth in AI/ML sophistication and deployment to the extent that it’s becoming increasingly difficult to find something that doesn’t boast some form of AI/ML. (Some people think that the recent news that Google is Using AI to Design its Next Generation of AI Chips More Quickly Than Humans Can signifies “the beginning of the end,” but I remain confident that it’s only “the end of the beginning.”)

One of the things that characterized AI/ML in the “early days” — which, for me, would be about six years ago as I pen these words — was how difficult it all used to be. When these technologies first stuck their metaphorical noses out of the lab, they required data scientists with size-16 brains (the ones with “go-faster” stripes on the sides) to train them to perform useful tasks. The problem is that data scientists are thin on the ground. Also, when you are an embedded systems designer, the last thing you want to do is to spend your life trying to explain to a data scientist what it is you want to do, if you see what I mean (it’s like the old software developer’s joke: “In order to understand recursion, you must first understand recursion”).

Happily, companies are now popping up like mushrooms with new technologies to make things much, much easier for the rest of us. For example, I was recently chatting with the folks at SensiML (pronounced “sense-ee-mel” to rhyme with “sensible”), whose mission it is to help embedded systems designers create AI/ML-equipped systems that run at the edge. SensiML’s role in all of this is to provide the developers with accurate AI/ML sensor algorithms that can run on the smallest IoT devices, along with the tools to make the magic happen.

Consider the SensiML Endpoint AI/ML Workflow as depicted below. The SensiML analytics toolkit suite automates each step of the process for creating and validating optimized AI/ML IoT sensor code. The overall workflow uses a growing library of advanced AI/ML algorithms to generate code that can learn from new data either during the development phase or once deployed.

The SensiML Endpoint AI/ML Workflow (Image source: SensiML)

We start with the Data Capture Lab, which is a fully-fledged, time-series sensor data collection and labeling tool. As the folks at SensiML say, “Collecting and labeling train/test data represents the single greatest development expense and source of differentiation in AI/ML model development. It is also one of the most overlooked and error-prone aspects of building ML models.” One thing I really like about this tool is how you can combine the data being collected with a video of events taking place. Suppose you are trying to train for gesture recognition, for example. Having the time-synchronized video makes it easy for you to identify to the system where each gesture starts and ends.
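To make the idea of video-synchronized labeling concrete, here’s a minimal sketch of the underlying bookkeeping — mapping timestamps you observe in the video back to indices in the sensor-sample stream. To be clear, this is not SensiML’s actual API; the sample rate, function names, and label format are all hypothetical illustrations.

```python
# Illustrative sketch (NOT SensiML's actual API): labeling time-series
# sensor segments by mapping video timestamps back to sample indices.

SAMPLE_RATE_HZ = 100  # hypothetical accelerometer sample rate


def video_time_to_sample(t_seconds, offset_seconds=0.0):
    """Convert a timestamp observed in the synchronized video into
    an index into the sensor-sample array."""
    return int(round((t_seconds - offset_seconds) * SAMPLE_RATE_HZ))


def label_segment(labels, start_s, end_s, gesture):
    """Record one labeled region, e.g. where a gesture starts and ends."""
    labels.append({
        "start": video_time_to_sample(start_s),
        "end": video_time_to_sample(end_s),
        "label": gesture,
    })


labels = []
label_segment(labels, 2.0, 2.75, "wave")   # gesture seen at 2.00-2.75 s in the video
label_segment(labels, 5.1, 5.6, "swipe")
print(labels)
```

The point is simply that a timestamp you can eyeball in the video becomes a precise start/end index in the raw data, which is exactly the tedium the tool takes off your hands.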

Next, we have the Analytics Studio, which “uses your labeled datasets to rapidly generate efficient inference models using AutoML and an extensive library of edge-optimized features and classifiers. Using cloud-based model search, Analytics Studio can transform your labeled raw data into high performance edge algorithms in minutes or hours, not weeks or months as with hand-coding. Analytics Studio uses AutoML to tackle the complexities of machine learning algorithm pre-processing, selection, and tuning without reliance on an expert to define and configure these countless options manually.”
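To give a flavor of what “edge-optimized features and classifiers” can mean in practice, here’s a toy sketch of the kind of pipeline such a search might settle on: cheap statistical features feeding a tiny classifier that would fit on a microcontroller. This is my own illustration, not Analytics Studio’s actual output or API; every name in it is hypothetical.

```python
# Illustrative sketch (NOT Analytics Studio itself): cheap statistical
# features feeding a tiny nearest-centroid classifier, the sort of
# MCU-friendly model an AutoML search might favor for the edge.
import math
from statistics import mean, stdev


def extract_features(window):
    """Edge-optimized features: cheap statistics over one sensor window."""
    return [mean(window), stdev(window), max(window) - min(window)]


def train_centroids(labeled_windows):
    """The 'model' is just per-class feature centroids (tiny memory footprint)."""
    sums, counts = {}, {}
    for window, label in labeled_windows:
        f = extract_features(window)
        s = sums.setdefault(label, [0.0] * len(f))
        for i, v in enumerate(f):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in s] for lbl, s in sums.items()}


def classify(centroids, window):
    """Assign the window to the class with the nearest feature centroid."""
    f = extract_features(window)
    return min(centroids, key=lambda lbl: math.dist(f, centroids[lbl]))


# Toy data: 'idle' windows are nearly flat; 'shake' windows swing widely.
data = [([0.1, 0.0, 0.1, 0.0], "idle"),
        ([0.0, 0.1, 0.0, 0.1], "idle"),
        ([-2.0, 2.0, -2.0, 2.0], "shake"),
        ([2.0, -2.0, 2.0, -2.0], "shake")]
model = train_centroids(data)
print(classify(model, [0.05, 0.1, 0.0, 0.05]))  # -> idle
```

The real toolkit searches over far richer feature libraries and classifier families, of course; the sketch just shows why the resulting models can be small enough for the tiniest IoT devices.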

The final step is to use TestApp to validate the accuracy of the model in real-time on the intended IoT device. As the chaps and chapesses at SensiML say, “The time gap between model simulation and working IoT device can take weeks or months with traditional design methods. With SensiML’s AutoML workflow culminating with on-device testing using TestApp, developers can get to a working prototype in mere days or weeks.”
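Conceptually, on-device validation boils down to streaming the device’s live classifications back and scoring them against the labels you captured earlier. Here’s a bare-bones sketch of that bookkeeping; again, this is a hypothetical illustration of the idea, not TestApp’s actual interface.

```python
# Illustrative sketch (hypothetical, NOT SensiML's TestApp): scoring
# per-window classifications from the device against labeled ground truth.

def validate(device_results, ground_truth):
    """Return the fraction of windows where the on-device model
    agreed with the labels captured during data collection."""
    correct = sum(1 for got, want in zip(device_results, ground_truth)
                  if got == want)
    return correct / len(ground_truth)


device = ["idle", "shake", "idle", "shake", "idle"]  # streamed from the device
truth = ["idle", "shake", "idle", "idle", "idle"]    # from the labeled dataset
print(f"on-device accuracy: {validate(device, truth):.0%}")
```

Doing this against the real hardware, rather than a desktop simulation, is what catches the sensor-timing and quantization gotchas before you commit to a design.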

There is a cornucopia of content on SensiML’s YouTube channel that will keep you busy for hours, including some that address strangely compelling topics like Cough Detection — Labeling Events. Another good all-rounder is the Predictive Maintenance Fan Demo.


Unfortunately, I fear there is far too much to all of this to cover here. If you are interested in learning more, I would strongly suggest that you take the time to peruse and ponder all there is to see on SensiML’s website, after which you can follow up with the YouTube videos. As usual, of course, I welcome your comments and questions.

