
Creating Tiny AI/ML-Equipped Systems to Run at the Extreme Edge

One of my favorite science fiction authors is/was Isaac Asimov (should we use the past tense since he is no longer with us, or the present tense because we still enjoy his writings?). In many ways Asimov was a futurist, but — like all who attempt to foretell what is to come — he occasionally managed to miss the mark.

Take his classic Foundation Trilogy, for example (before he added the two prequels and two sequels). On the one hand we have a Galactic Empire that spans the Milky Way with millions of inhabited worlds and quadrillions of people. Also, we have mighty space vessels equipped with hyperdrives that can convey people from one side of the galaxy to the other while they are still young enough to enjoy the experience.

On the other hand, in Foundation and Empire, when a message arrives at a spaceship via hyperwave for the attention of General Bel Riose, it’s transcribed onto a metal spool that’s placed in a message capsule that will open only to his thumbprint. Asimov simply never conceived of things like today’s wireless networks and tablet computers and suchlike.

As an aside, I was completely blown away when I heard that this classic tale will soon be gracing our television screens (see OMG! Asimov’s Foundation is Coming to TV!). I just took another look at the Official Teaser Trailer and I, for one, cannot wait!


Something else for which Asimov is famed is his Three Laws of Robotics, which were introduced in his 1942 short story Runaround. Don’t worry, I’m not going to recite them here — if you are reading this column, it’s close to a 99.9999% certainty that you, like me, can recite them by heart. Asimov used these laws to great effect in his various robot stories, along with the concept of the robots having “positronic brains” (when he wrote his first robot stories in 1939 and 1940, the positron was a newly discovered particle, so the buzzword “positronic” added a soupçon of science to the proceedings). Although he didn’t provide too many nitty-gritty details, there was talk of “pathways” being formed in the brains, resulting in what we might today recognize as a super-sophisticated form of analog artificial neural network (AANN) (a supersized version of Aspinity’s Awesome AANNs, if you will, and don’t get me started talking about Analog AI Surfacing in Sensors).

The Dartmouth workshop of 1956 is now widely considered to be the founding event of artificial intelligence (AI) as a field, so Asimov would probably not have been aware of many AI and machine learning (ML) concepts when he wrote his early robot stories. Now I come to think about it, however, I find it telling that the only time what we would now refer to as “artificial intelligence” ever raised its head in Asimov’s writings was in the form of his robots and their positronic brains. As we now know, AI and ML can appear all over the place, from small microcontrollers at the extreme edge of the internet to humongous server farms, a.k.a. the cloud. I would love to be able to get my time machine working and bring Asimov to the present day to show him all the stuff we have now, like Wi-Fi and smartphones and virtual reality (VR) and suchlike. I would also love to introduce him to the current state of play regarding AI and ML.

As I’ve mentioned before (and as I’ll doubtless mention again), following the Dartmouth workshop, AI/ML was largely an academic pursuit until circa 2015 when it exploded onto the scene. The past few years have seen amazing growth in AI/ML sophistication and deployment to the extent that it’s becoming increasingly difficult to find something that doesn’t boast some form of AI/ML. (Some people think that the recent news that Google is Using AI to Design its Next Generation of AI Chips More Quickly Than Humans Can signifies “the beginning of the end,” but I remain confident that it’s only “the end of the beginning.”)

One of the things that characterized AI/ML in the “early days” — which, for me, would be about six years ago as I pen these words — was how difficult it all used to be. When these technologies first stuck their metaphorical noses out of the lab, they required data scientists with size-16 brains (the ones with “go-faster” stripes on the sides) to train them to perform useful tasks. The problem is that data scientists are thin on the ground. Also, when you are an embedded systems designer, the last thing you want to do is to spend your life trying to explain to a data scientist what it is you want to do, if you see what I mean (it’s like the old software developer’s joke: “In order to understand recursion, you must first understand recursion”).

Happily, companies are now popping up like mushrooms with new technologies to make things much, much easier for the rest of us. For example, I was recently chatting with the folks at SensiML (pronounced “sense-ee-mel” to rhyme with “sensible”), whose mission it is to help embedded systems designers create AI/ML-equipped systems that run at the edge. SensiML’s role in all of this is to provide the developers with accurate AI/ML sensor algorithms that can run on the smallest IoT devices, along with the tools to make the magic happen.

Consider the SensiML Endpoint AI/ML Workflow as depicted below. The SensiML analytics toolkit suite automates each step of the process for creating and validating optimized AI/ML IoT sensor code. The overall workflow uses a growing library of advanced AI/ML algorithms to generate code that can learn from new data, either during the development phase or once deployed.

The SensiML Endpoint AI/ML Workflow (Image source: SensiML)

We start with the Data Capture Lab, which is a fully-fledged, time-series sensor data collection and labeling tool. As the folks at SensiML say, “Collecting and labeling train/test data represents the single greatest development expense and source of differentiation in AI/ML model development. It is also one of the most overlooked and error-prone aspects of building ML models.” One thing I really like about this tool is how you can combine the data being collected with a video of events taking place. Suppose you are trying to train for gesture recognition, for example. Having the time-synchronized video makes it easy for you to identify to the system where each gesture starts and ends.
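
Just to give a flavor of what’s involved, here’s a minimal Python sketch of how a labeled time-series recording might be represented. To be clear, these structures and names are my own illustrative inventions, not SensiML’s actual file format or API; the point is simply that, because the video is time-synchronized with the sensor stream, the timestamp at which a gesture starts on screen maps directly to a sample index.

```python
from dataclasses import dataclass

# Illustrative structures only; not SensiML's actual format or API.

@dataclass
class LabeledSegment:
    label: str     # e.g., "wave" or "swipe_left"
    start_ms: int  # segment start, relative to the start of the recording
    end_ms: int    # segment end

@dataclass
class Recording:
    sample_rate_hz: int
    samples: list   # e.g., a list of (x, y, z) accelerometer tuples
    segments: list  # the LabeledSegments identified from the synced video

    def segment_samples(self, seg: LabeledSegment) -> list:
        """Return the raw samples that fall inside a labeled segment."""
        # A video timestamp in milliseconds maps directly to a sample
        # index: index = t_ms * sample_rate_hz / 1000.
        start = seg.start_ms * self.sample_rate_hz // 1000
        end = seg.end_ms * self.sample_rate_hz // 1000
        return self.samples[start:end]
```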

Next, we have the Analytics Studio, which “uses your labeled datasets to rapidly generate efficient inference models using AutoML and an extensive library of edge-optimized features and classifiers. Using cloud-based model search, Analytics Studio can transform your labeled raw data into high performance edge algorithms in minutes or hours, not weeks or months as with hand-coding. Analytics Studio uses AutoML to tackle the complexities of machine learning algorithm pre-processing, selection, and tuning without reliance on an expert to define and configure these countless options manually.”
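
To give a feel for what such a model search does, the following toy sketch uses scikit-learn to compare a handful of small, edge-friendly classifiers on made-up per-window features and keep the best performer. This is purely a conceptual stand-in of my own devising; the real Analytics Studio also handles preprocessing, feature selection, and tuning, and targets edge-optimized implementations, none of which is shown here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Toy stand-in for labeled training data: X holds per-window statistics
# (e.g., mean/std/min/max per sensor axis); y holds the class labels.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 12))
y = rng.integers(0, 3, size=200)

# Try a few small classifiers that could plausibly fit on an IoT device
# and keep whichever cross-validates best: AutoML in miniature.
candidates = {
    "decision tree": DecisionTreeClassifier(max_depth=5),
    "logistic regression": LogisticRegression(max_iter=1000),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
}
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"Best model: {best} (cross-validated accuracy {scores[best]:.2f})")
```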

The final step is to use TestApp to validate the accuracy of the model in real-time on the intended IoT device. As the chaps and chapesses at SensiML say, “The time gap between model simulation and working IoT device can take weeks or months with traditional design methods. With SensiML’s AutoML workflow culminating with on-device testing using TestApp, developers can get to a working prototype in mere days or weeks.”
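
Conceptually, this kind of on-device validation boils down to streaming windows of live sensor data through the trained model and tallying its predictions against the ground-truth labels a tester supplies in real time. Here’s a bare-bones sketch of that loop; the function and names are placeholders of my own, not TestApp’s actual interface.

```python
# Illustrative validation loop; not TestApp's actual interface.

def validate(model, labeled_windows):
    """labeled_windows yields (feature_vector, true_label) pairs
    captured from the live device while a tester performs known
    gestures; model is any classifier with a predict() method."""
    correct = 0
    total = 0
    confusion = {}  # (true_label, predicted_label) -> count
    for features, true_label in labeled_windows:
        predicted = model.predict([features])[0]
        key = (true_label, predicted)
        confusion[key] = confusion.get(key, 0) + 1
        if predicted == true_label:
            correct += 1
        total += 1
    return correct / total, confusion
```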

There is a cornucopia of content on SensiML’s YouTube channel that will keep you busy for hours, including some that address strangely compelling topics like Cough Detection — Labeling Events. Another good all-rounder is the Predictive Maintenance Fan Demo.


Unfortunately, I fear there is far too much to all of this for me to cover here. If you are interested in learning more, I would strongly suggest that you take the time to peruse and ponder all there is to see on SensiML’s website, after which you can follow up with the YouTube videos. As usual, of course, I welcome your comments and questions.
