
Why, Hello FPGA and AI — How Nice to See You Together!

I love science and technology. I also love beer and bacon sandwiches, but that’s nothing to do with this column, so don’t try to distract me. Generally speaking, I like to think I have a ringside seat at the forefront of technology. Recently, however, I must admit to starting to fear I was no longer riding the crest of the technology wave with regard to things like artificial intelligence (AI) and machine learning (ML).

In fact, I was preparing to start wearing my sad face when three things came along to cheer me up again: NanoEdge AI Studio from Cartesiam, the TinyML book by Pete Warden & Daniel Situnayake, and the Hello FPGA Kit from Microchip Technology.

As a throwaway factoid, I recently read that approximately 15 billion microcontrollers are sold each year, and this number is rising exponentially. I don’t care what anyone says; where I come from, that’s a lot of microcontrollers.

According to the International Data Corporation (IDC), there are currently around 22 million software developers in the world. The majority of these write high-level application software for things like PCs, workstations, and servers. Only about 1.2 million focus on embedded systems — which essentially means writing code to run on microcontrollers — and only about 0.2% of these have even minimal AI/ML skills.

On the bright side, at least I’m not alone. On the downside, the world is crying out for embedded systems that are equipped with AI/ML capabilities, but who is going to create the little rascals?

When most people hear the terms AI and ML, they tend to think of computer vision applications involving capabilities like object detection and recognition. While these applications are awesome in their own right, they represent only a small portion of potential AI/ML deployments.

Consider an industrial facility, for example. At the time of this writing, in the USA alone, there are trillions of dollars’ worth of old equipment like motors and generators and pumps and… you get the idea… unobtrusively chugging away, doing their thing. Many of these little scamps have only rudimentary automatic monitoring and control systems at best. If they are lucky, there might be a temperature sensor and/or an oil level sensor, along with some basic control system that will power the machine down if the sensor readings move outside their specified bounds.

Now consider how, while strolling around, an engineer who has decades of experience in that factory might hear an unexpected (almost undetectable) sound as they pass a piece of apparatus, or — laying their hands on a machine — feel an unusual vibration, and intuitively determine that something untoward was happening.

This is a perfect target for an AI/ML system equipped with modern sensors like accelerometers and microphones augmenting traditional sensors measuring things like temperature, pressure, and flow. If such an AI/ML system is trained to know how the machine behaves when all is well, then it can be used to detect slight anomalies and spot trends and alert its human colleagues when any potential problems are heading their way.
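For the morbidly curious, here’s a minimal sketch in Python of the sort of summary features such a system might distill out of a raw accelerometer window before any learning takes place. The 1 kHz sampling rate, the window size, and the choice of features are all illustrative assumptions on my part — nothing here is prescribed by the tools we’re about to discuss:

    import numpy as np

    SAMPLE_RATE_HZ = 1000  # assumption: 1 kHz accelerometer sampling
    WINDOW_SIZE = 1024     # assumption: samples per analysis window

    def vibration_features(window):
        """Boil one window of raw accelerometer samples down to a few features."""
        rms = np.sqrt(np.mean(window ** 2))             # overall vibration energy
        spectrum = np.abs(np.fft.rfft(window))          # magnitude spectrum
        freqs = np.fft.rfftfreq(WINDOW_SIZE, d=1.0 / SAMPLE_RATE_HZ)
        peak_freq = freqs[1 + np.argmax(spectrum[1:])]  # dominant non-DC tone
        crest = np.max(np.abs(window)) / rms            # "spikiness" of the signal
        return np.array([rms, peak_freq, crest])

    # Fake one window of "healthy" vibration: a 50 Hz hum plus a little noise
    t = np.arange(WINDOW_SIZE) / SAMPLE_RATE_HZ
    healthy = 0.5 * np.sin(2 * np.pi * 50 * t) + 0.05 * np.random.randn(WINDOW_SIZE)
    print(vibration_features(healthy))  # [rms, ~50 Hz peak, crest factor]

A real system would track features like these over time; the interesting part — deciding what counts as “normal” for this particular machine — is where the ML comes in.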

NanoEdge AI Studio

As Jim Turley wrote in his column, Machine Learning at the Push of a Button, here on EE Journal a few weeks ago:

Cartesiam’s thesis is that ML is hard, and that developing embedded AI requires special skills that most of us don’t have. You could hire a qualified data scientist to analyze your system and develop a good model, but such specialists are hard to find and expensive when they’re available. Plus, your new hire will probably need a year or so to complete their analysis – and that’s before you start coding or even know what sort of hardware you’ll need. 

Instead, Cartesiam figures that most smart IoT devices have certain things in common and don’t need their own full-time, dedicated data scientist to figure things out, just like you don’t need a compiler expert to write C code or a physicist to draw a schematic. Let the tool do the work.

Ah, “AI requires special skills that most of us don’t have.” Truer words have rarely been spoken. Another consideration is that training traditional AI/ML systems, like computer vision applications, takes humongous data sets. In the case of an organization like Google, it’s relatively easy to throw a million images tagged “contains a heffalump” or “no heffalumps here” at the problem and train the AI/ML system accordingly. But how about two essentially identical pumps located 20 meters apart in a factory, where one is mounted on a concrete floor while the other sits on a steel mesh platform, and one is attached to short lengths of piping while the other’s piping runs are substantially longer? The end result is that the audio and vibration signatures of these machines may be radically different, even though both devices are happily purring along without a care in the world.

This is where Cartesiam’s NanoEdge AI Studio comes into play. It starts by asking you some basic questions, like which processor you wish to use (current choices are Arm Cortex-M0, M0+, M3, M4, and M7) and how much memory you wish to devote to the task (you can go as low as 4 KB of RAM). You also tell it which sensors you intend to use — not specific part numbers, just the general numbers and types, like “two temperature sensors and one 3-axis accelerometer.”

NanoEdge AI Studio then cogitates and ruminates and generates the best AI/ML system for the task out of 500 million different possibilities. This system is presented in the form of a C library that you incorporate into your main microcontroller program.

Design flow with NanoEdge AI Studio (Image source: Cartesiam.ai)

The clever part comes when you install your new system on the machine in question. First, you enter a training mode in which you leave the machine running 24/7 for, say, a week, to train your AI/ML model. After this, you switch to run mode, in which the system monitors the machine looking for anomalies and unusual patterns of behavior. I’m currently planning on building such a system myself and will report back on my progress in the not-so-distant future.
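I won’t presume to guess at the interface of the library Cartesiam actually generates, but the learn-then-detect idea itself can be sketched in a few lines of Python. Everything below — the class, the feature vectors, the crude three-sigma rule — is a toy stand-in of my own devising, not Cartesiam’s algorithm:

    import numpy as np

    class AnomalyMonitor:
        """Toy learn-then-detect monitor (my invention, not Cartesiam's algorithm)."""

        def __init__(self):
            self.baseline = []   # feature vectors gathered in training mode

        def learn(self, features):
            """Training mode: accumulate features while the machine runs normally."""
            self.baseline.append(features)

        def detect(self, features):
            """Run mode: flag any feature more than 3 sigma away from the baseline."""
            stacked = np.vstack(self.baseline)
            mean = stacked.mean(axis=0)
            std = stacked.std(axis=0) + 1e-9   # avoid divide-by-zero
            return bool(np.any(np.abs(features - mean) / std > 3.0))

    monitor = AnomalyMonitor()
    for _ in range(1000):  # a week of training mode, compressed into a loop
        monitor.learn(np.random.normal([0.5, 50.0], [0.02, 0.5]))

    print(monitor.detect(np.array([0.51, 50.3])))  # healthy reading -> False
    print(monitor.detect(np.array([0.90, 35.0])))  # bearing trouble -> True

The real product obviously does something far more sophisticated in selecting from its 500 million candidates, but the two-phase shape of the workflow is the same.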

TinyML: Machine Learning on Ultra-Low-Power MCUs

The full (and fulsome) title of this book is “TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers,” but I was worried this would be a bit of a mouthful for a column section title.

Talking about lengths of titles, I’m reminded of the modern classic “How to Sharpen Pencils: A Practical & Theoretical Treatise on the Artisanal Craft of Pencil Sharpening for Writers, Artists, Contractors, Flange Turners, Anglesmiths & Civil Servants, with Illustrations Showing Current Practice” by David Rees — a tome that should grace every bookshelf, and which certainly has pride of place in my own office, but we digress…

Prior to reading this book, I had only a vague, high-level understanding as to how all of this worked, along the lines of the following (there’s a code sketch after this list):

  • Create an artificial neural network (ANN) architecture using some number of layers each with some number of neurons based on 32-bit floating-point coefficients.
  • Generate some sort of training dataset.
  • Train the network using the dataset.
  • Convert and optimize the 32-bit floating-point network into a 16-bit or 8-bit fixed-point equivalent inferencing engine in a form suitable for deployment on the target platform (MCU, GPU, FPGA, etc.).
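For anyone who’d like to see those four bullets in code, here’s a minimal end-to-end sketch using TensorFlow’s Python API, teaching a tiny network to approximate y = sin(x) — much the same spirit as the book’s own opening example. The network dimensions, training settings, and file name are illustrative choices on my part:

    import numpy as np
    import tensorflow as tf

    # 1. Define a tiny network: two hidden layers of 16 neurons, float32 weights
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(1,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

    # 2. Generate a training dataset (inputs and their sin() values)
    x = np.random.uniform(0, 2 * np.pi, (1000, 1)).astype(np.float32)
    y = np.sin(x)

    # 3. Train the network
    model.fit(x, y, epochs=100, batch_size=32, verbose=0)

    # 4. Convert/optimize the float32 model into an int8 inferencing engine
    def representative_data():
        for sample in x[:100]:               # calibration data for quantization
            yield [sample.reshape(1, 1)]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    tflite_model = converter.convert()

    with open("sine_model.tflite", "wb") as f:
        f.write(tflite_model)                # a few kilobytes, ready for an MCU
    print(len(tflite_model), "bytes")

The resulting .tflite file is what you’d subsequently feed to the TensorFlow Lite for Microcontrollers runtime on the target board.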

Even with that mental model, I really didn’t know any of the nitty-gritty details. This is where TinyML really scores, because the authors begin by introducing you to a bunch of free software and explaining all of the fundamental concepts so clearly that even I understood them. They then walk you step-by-step through the process of creating and training a simple network, analyzing the results, making modifications, and doing everything again and again until you get the results you are looking for.

Until reading this book, I also hadn’t realized how much progress had been made on the scripts and tooling that help you create, run, analyze, optimize, and deploy your networks.

And, speaking of deploying your networks, the authors recommend a number of boards you can use to implement their experiments, including the Arduino Nano 33 BLE. In addition to its 32-bit Arm Cortex-M4 processor running at 64 MHz, this board — which costs only $26 on Amazon Prime — boasts a microphone; a 9-axis inertial sensor; temperature, humidity, and barometric pressure sensors; and a gesture, proximity, light color, and light intensity sensor. In one of the experiments, you are guided to create a “Magic Wand” that can detect different gestures as you wave your wand around and use them to control things. If only I’d had one of these when I was reading Harry Potter!

Hello FPGA

All of which brings us to the Hello FPGA Kit from Microchip Technology. As it says in the brochure:

Low-cost and compact-sized, the Hello FPGA kit is for anyone with low to medium FPGA knowledge. Based on the SmartFusion2 SoC FPGA with its Arm Cortex-M3 processor with embedded Flash, the kit supports power demos and provides a versatile platform for you to prototype your designs.

With Arduino and mikroBUS connectors, Hello FPGA supports additional expansion boards for more prototyping capabilities. Hello FPGA can be used to develop applications from simple control logic to data acquisition, image processing, signal processing and Artificial Intelligence (AI) applications.

The kit measures live FPGA core power consumption and enables users to use the unique Flash*Freeze mode while maintaining the I/O state for low power applications. The Hello FPGA board includes a Microchip PIC32 MCU that is used to program the SmartFusion2 SoC FPGA, monitor power, and perform general housekeeping functions.

In preparation for creating your own designs, the Hello FPGA Kit is supported by a graphical user interface and existing projects that you can download and run. These include a DSP application in the form of a filter, an image processing application, and… wait for it… wait for it… an AI digit recognition application.

In addition to the main board, the Hello FPGA Kit also includes a camera sensor board and an LCD display, as illustrated below:

The Hello FPGA Kit (Image source: Microchip Technology)

If the truth be told, this kit was originally planned to be released this week (i.e., the week of this column’s posting), but a sneaky little pandemic in the form of COVID-19 derailed things a tad. I now hear Hello FPGA should hit the streets in June.

But turn that frown upside down into a smile, because at least one of our number already has a brand spanking new Hello FPGA Kit in his possession. Yes, of course I’m talking about me.

Actually, I feel a little silly, because when I originally saw the image above, I vaguely thought that the camera sensor must be of some new type with which I was unfamiliar. I thought the same thing later when I opened my own Hello FPGA Kit, but then I said to myself, “That can’t be right.” It turns out that what we see in the image is a plastic cover. Removing this cover reveals a lens that looks suspiciously like a miniature HAL 9000 optical sensor (just sayin’).

But wait, there’s more, because the folks at Microchip have informed me they will now give three Hello FPGA Kits away in a random drawing of all the folks who attend my What the FAQ is an FPGA? presentation at the forthcoming Embedded Online Conference.

This conference was originally planned to be a 1-day affair taking place on Wednesday 20 May 2020, but I just received a message from the organizers, who spake as follows:

The conference now has 40+ sessions and includes a Virtual Show Floor, with Product Demos, Theatre Talks, and a new Workshops Section for your learning pleasure. 

With so much quality content, we’ve made the decision to make the conference a 2-day event. This simply means that the talks will be released over a 2-day schedule, which will be announced and published on the conference’s website later this week.

Well, that is good news. Even better is the fact that anyone who is registered for the conference can watch as many sessions as they like and ask questions of the presenters. Furthermore, if you don’t have time to watch all the sessions that interest you, they will remain available for registered attendees to peruse and ponder for several days after the conference.

One slight fly in the soup is that the registration fee has now risen to $290, but I was just looking at the Registration Form and — if you are currently working from home — the organizers say you can use coupon code CORONA90 when you register and pay only $90.

My mom told me to tell you that $90 is an amazingly good deal to hear my presentation — and to also mention you would have access to the 39+ other presentations for free — but (remembering that I pride myself on my humility) I told her it would be immodest of me to mention this, so we’ll say no more about it.

How about you? Have you dabbled your toes in the AI waters? If not, might you be tempted to look into Cartesiam’s NanoEdge AI Studio technology, read the TinyML book, or experiment with Microchip’s Hello FPGA Kit? In all cases, I’d love to hear what you think.

