
AI at the IoT Endpoint

QuickLogic Fosters Sensing Ecosystem

Computation is entering an era of unprecedented heterogeneous distribution. The diverse demands of IoT applications require everything from heavy-iron, deep-learning data-crunching to ultra-low-latency snap recognition and judgment. Our IoT devices and systems must be simultaneously aware of and responsive to their own local context, and able to harness the power of massive compute resources for more global issues. A self-driving vehicle can’t afford to send gobs of raw sensor data upstream to the cloud and then wait for an answer on target identification to return before deciding whether to brake or swerve. It needs to decide immediately whether or not there’s a human in the crosswalk, but it can wait a while before rendering an AI judgment on whether the pedestrian’s attire was fashionable.

In intelligent assistants such as Alexa, Google Assistant, and Siri, it would be utterly impractical (and a major privacy intrusion) for every one of the millions of devices in service to send all the audio back to the cloud for wake-word recognition. To prevent the communication and compute problem from becoming untenable, we need to do as much of the AI processing as possible locally, without sending data over the network or waking power-hungry application processors. This brand of processing requires a combination of specialized hardware acceleration and miserly power consumption. It requires hardware that can easily adapt to various configurations of sensors at the IoT edge. It requires BOM-friendly low cost and high-volume applicability.

This is QuickLogic’s wheelhouse.
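
To make that brand of processing concrete, here is a minimal sketch in C of the local gating described above: audio is analyzed on the device frame by frame, and nothing is transmitted or woken up until a wake-word score crosses a threshold. The mic driver, detector, and threshold value are hypothetical stubs for illustration, not QuickLogic or assistant-vendor code.

```c
/*
 * Minimal sketch of local wake-word gating (illustrative only).
 * The mic driver and detector below are hypothetical stubs so the
 * example compiles; they are not QuickLogic or assistant-vendor APIs.
 */
#include <stdio.h>

#define FRAME_SAMPLES   256
#define SCORE_THRESHOLD 0.80f

/* Hypothetical stand-ins for a real PDM microphone driver and local detector. */
static void  read_mic_frame(short *frame)        { for (int i = 0; i < FRAME_SAMPLES; i++) frame[i] = 0; }
static float score_wake_word(const short *frame) { (void)frame; return 0.10f; }
static void  wake_host_and_stream(void)          { puts("wake word detected: handing off to host"); }

int main(void)
{
    short frame[FRAME_SAMPLES];

    for (int n = 0; n < 100; n++) {              /* bounded loop for the sketch */
        read_mic_frame(frame);                   /* audio stays on the device... */
        if (score_wake_word(frame) > SCORE_THRESHOLD) {
            wake_host_and_stream();              /* ...until a keyword is actually heard */
        }
        /* otherwise: no radio traffic, and the application processor stays asleep */
    }
    return 0;
}
```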

Life would be easy if every engineering team included data scientists who could design the training regimens working hand in hand with hardware experts who could partition the problem between conventional software, programmable hardware, and specialized neural network configuration. But life is not easy. Most projects don’t have access to the wide range of skills and expertise required to optimally engineer an AI endpoint for their IoT design. To make that happen, we need an ecosystem with plug-and-play hardware, software, and AI components and IP that will allow an average engineering project to take advantage of endpoint AI. This month, QuickLogic and several partners are introducing just such an ecosystem.

QuickLogic, along with SensiML, General Vision, and Nepes Corporation, is introducing the “QuickAI” ecosystem and hardware development kit (HDK), which combines QuickLogic’s EOS S3 SoC FPGAs with NM500 neuromorphic processors and AI IP to allow design teams to add endpoint AI to a wide range of applications. The NM500 neuromorphic processor is built by Nepes using IP licensed from General Vision. It features 576 neurons while consuming a meager 0.1 watt of power. General Vision’s NeuroMem IP provides a scalable, silicon-trainable, low-power network architecture, which the company says is capable of learning and recalling patterns autonomously without the need for high-powered data-center processors for training. Rounding out the QuickAI platform is SensiML’s Analytics Toolkit, which is designed to help designers quickly and easily build smart sensor algorithms for IoT edge/endpoint devices.

The General Vision NeuroMem technology provides the backbone of the platform, enabling embedded exact and fuzzy pattern matching and learning using a scalable architecture of radial basis function (RBF) neurons. The architecture is parallel, guaranteeing a fixed latency for any particular number of neurons and delivering high levels of computation at very low clock frequencies for power efficiency. General Vision provides a tool suite and SDK called “Knowledge Builder” that trains and configures the neurons in the NeuroMem network.
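
As a rough illustration of how that style of network behaves, the C sketch below implements a generic RCE/RBF-style matcher: each neuron stores a prototype vector, a category, and an influence field (a distance radius); an input is classified by the nearest neuron whose field covers it, and learning either shrinks the fields of wrongly firing neurons or commits a new one. This is a simplified illustration with toy vector lengths, not General Vision’s Knowledge Builder API or the NM500 register interface.

```c
/*
 * Illustrative RCE/RBF-style pattern matcher, loosely modeled on the
 * behavior described above. Not General Vision's API; toy dimensions.
 */
#include <stdio.h>
#include <stdlib.h>

#define DIM         4     /* feature-vector length (toy size)      */
#define MAX_NEURONS 576   /* NM500-like capacity, for flavor only  */

typedef struct {
    int proto[DIM];       /* stored prototype vector               */
    int category;         /* label taught with the prototype       */
    int field;            /* influence field: max distance to fire */
} Neuron;

static Neuron net[MAX_NEURONS];
static int    committed = 0;

static int l1_distance(const int *a, const int *b)
{
    int d = 0;
    for (int i = 0; i < DIM; i++) d += abs(a[i] - b[i]);
    return d;
}

/* Learn: shrink wrongly firing neurons; commit a new neuron if none of
 * the correct category recognized the example. */
static void learn(const int *vec, int category, int initial_field)
{
    int recognized = 0;
    for (int n = 0; n < committed; n++) {
        int d = l1_distance(vec, net[n].proto);
        if (d < net[n].field) {
            if (net[n].category == category) recognized = 1;
            else net[n].field = d;            /* shrink field to exclude the new example */
        }
    }
    if (!recognized && committed < MAX_NEURONS) {
        for (int i = 0; i < DIM; i++) net[committed].proto[i] = vec[i];
        net[committed].category = category;
        net[committed].field    = initial_field;
        committed++;
    }
}

/* Classify: category of the closest neuron whose field covers the input. */
static int classify(const int *vec)
{
    int best_cat = -1, best_d = 1 << 30;
    for (int n = 0; n < committed; n++) {
        int d = l1_distance(vec, net[n].proto);
        if (d < net[n].field && d < best_d) { best_d = d; best_cat = net[n].category; }
    }
    return best_cat;                          /* -1 means "unknown" */
}

int main(void)
{
    int a[DIM] = {10, 10, 10, 10}, b[DIM] = {50, 50, 50, 50}, q[DIM] = {12, 9, 11, 10};
    learn(a, 1, 40);
    learn(b, 2, 40);
    printf("query -> category %d\n", classify(q));   /* expected: 1 */
    return 0;
}
```

In silicon, every committed neuron evaluates its distance in parallel, which is where the fixed recognition latency comes from; the loops above only emulate that behavior sequentially.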

The NM500 implements the General Vision NeuroMem in a small form-factor component, which can be trained in the field to recognize patterns in real time. Multiple NM500s can be chained to provide an arbitrary number of neurons, and Nepes also provides software tools for configuring and training the NM500 neurons. SensiML’s Analytics Toolkit is designed to automate the management of training data, optimize the choice of feature extraction algorithms, and automate code generation for the resulting AI solution. The QuickLogic EOS S3 voice- and sensor-processing platform performs audio processing and sensor aggregation. In addition to FPGA fabric, it includes an Arm Cortex-M4F core and a Flexible Fusion Engine (FFE) that can pick up the conventional processing chores on a tiny power budget.
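
For a motion application, the generated endpoint code might reduce each window of accelerometer samples to a small feature vector before anything reaches the classifier, as in the C sketch below. The window size, feature set, and names are a generic illustration, not actual SensiML output.

```c
/*
 * Illustrative per-window motion features of the kind an analytics tool
 * might select and emit as C code. Generic example, not SensiML output.
 */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define WINDOW 128   /* accelerometer samples per analysis window */

typedef struct {
    float mean;
    float abs_max;
    float rms;
    int   zero_crossings;
} MotionFeatures;

static MotionFeatures extract_features(const float *x, int n)
{
    MotionFeatures f = {0.0f, 0.0f, 0.0f, 0};
    float sum = 0.0f, sum_sq = 0.0f;

    for (int i = 0; i < n; i++) {
        sum    += x[i];
        sum_sq += x[i] * x[i];
        if (fabsf(x[i]) > f.abs_max) f.abs_max = fabsf(x[i]);
        if (i > 0 && ((x[i - 1] < 0.0f) != (x[i] < 0.0f))) f.zero_crossings++;
    }
    f.mean = sum / (float)n;
    f.rms  = sqrtf(sum_sq / (float)n);
    return f;   /* this small vector, not the raw samples, goes to the classifier */
}

int main(void)
{
    float window[WINDOW];
    for (int i = 0; i < WINDOW; i++)   /* synthetic vibration-like test data */
        window[i] = sinf(0.2f * (float)i) + 0.1f * ((float)(rand() % 100) / 100.0f);

    MotionFeatures f = extract_features(window, WINDOW);
    printf("mean=%.3f abs_max=%.3f rms=%.3f zero_crossings=%d\n",
           f.mean, f.abs_max, f.rms, f.zero_crossings);
    return 0;
}
```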

The QuickAI HDK platform is designed as a demo, evaluation, and development platform for endpoint AI applications. It includes the QuickLogic EOS S3 in a “stamp module” that can also be used in production, two Nepes NM500 neuromorphic processors, two PDM microphones, an nRF51822 Bluetooth Low Energy (BLE) module, a USB-to-UART bridge, MX25R3235 flash, an AK9915 3-axis magnetic sensor, and a 70-pin expansion connector. The platform is expandable to add more NM500s for applications that require more neurons. The goal of the HDK is to reduce development time and time to market for endpoint AI applications that involve motion, acoustic, or image processing.

If you’re designing industrial applications such as vision-based inspection, QuickAI can enable classification of textures, such as foods and surfaces, using high-speed template learning and matching to adapt to changes in materials or color. The FPGA can capture and aggregate sensor data, perform feature extraction using FFT or MFCC, and pass the reduced information along to the NM500 for processing. The FFE provides an ultra-low-power, always-on (AON) function accelerator. The result is an easy-to-design, low-power, high-performance, adaptable system that can perform advanced pattern matching at the edge without the need to push data up to the cloud for additional crunching.
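
A minimal sketch of that front end is shown below, reducing one frame to a handful of spectral band energies before they are handed to the pattern matcher. A naive DFT stands in for the optimized FFT an FPGA or DSP block would actually implement, and the frame length, band count, and function names are illustrative assumptions.

```c
/*
 * Illustrative spectral front end: reduce one frame to a few band
 * energies before handing the result to the pattern matcher. A naive
 * DFT stands in for the FFT an FPGA or DSP block would implement.
 */
#include <math.h>
#include <stdio.h>

#define N     64                 /* frame length (toy size) */
#define BANDS 8                  /* output feature length   */
#define PI    3.14159265358979f

static void band_energies(const float *frame, float *bands)
{
    const int bins_per_band = (N / 2) / BANDS;

    for (int b = 0; b < BANDS; b++) bands[b] = 0.0f;

    for (int k = 1; k <= N / 2; k++) {           /* positive-frequency bins */
        float re = 0.0f, im = 0.0f;
        for (int n = 0; n < N; n++) {
            float w = 2.0f * PI * (float)(k * n) / (float)N;
            re += frame[n] * cosf(w);
            im -= frame[n] * sinf(w);
        }
        int b = (k - 1) / bins_per_band;
        if (b >= BANDS) b = BANDS - 1;
        bands[b] += re * re + im * im;           /* accumulate bin energy into its band */
    }
}

int main(void)
{
    float frame[N], bands[BANDS];

    for (int n = 0; n < N; n++)                  /* synthetic tone landing in band 1 */
        frame[n] = sinf(2.0f * PI * 8.0f * (float)n / (float)N);

    band_energies(frame, bands);
    for (int b = 0; b < BANDS; b++)
        printf("band %d energy: %.1f\n", b, bands[b]);
    return 0;
}
```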

Starting life as a smaller player in the FPGA market, QuickLogic has pivoted into numerous high-value niche markets where they could take advantage of the unique features of their programmable logic technology for targeted applications. Because of the low-power performance and low cost of their devices, they have carved out a good business in the mobile market, and the current trend toward moving AI to the endpoint in IoT systems has provided fertile ground for this type of platform to succeed. The collaboration with specialized AI players like General Vision, Nepes, and SensiML creates a robust development platform that should eliminate much of the friction for design teams wanting to take advantage of AI technology at the IoT edge. It will be interesting to watch how and where this technology catches on.
