
AI at the IoT Endpoint

QuickLogic Fosters Sensing Ecosystem

Computation is entering an era of unprecedented heterogeneous distribution. The diverse demands of IoT applications require everything from heavy-iron, deep-learning data crunching to ultra-low-latency snap recognition and judgment. Our IoT devices and systems must be simultaneously aware of and responsive to their own local context, and able to harness the power of massive compute resources for more global issues. A self-driving vehicle can’t afford to send gobs of raw sensor data upstream to the cloud and then wait for an answer on target identification before deciding whether to brake or swerve. It needs to decide immediately whether or not there’s a human in the crosswalk, but it can wait awhile before rendering an AI judgment on whether the pedestrian’s attire was fashionable.

In intelligent assistants such as Alexa, Google Assistant, and Siri, it would be utterly impractical (and a major privacy intrusion) for every one of the millions of devices in service to send all of their audio back to the cloud for wake-word recognition. To keep the communication and compute problem from becoming untenable, we need to do as much of the AI processing as possible locally, without sending data over the network or waking power-hungry application processors. This brand of processing requires specialized hardware acceleration with miserly power consumption. It requires hardware that can easily adapt to various configurations of sensors at the IoT edge. And it requires BOM-friendly low cost and high-volume applicability.

This is QuickLogic’s wheelhouse.

Life would be easy if every engineering team included data scientists who could design the training regimens, working hand in hand with hardware experts who could partition the problem between conventional software, programmable hardware, and specialized neural network configuration. But life is not easy. Most projects don’t have access to the wide range of skills and expertise required to optimally engineer an AI endpoint for their IoT design. To make that happen, we need an ecosystem with plug-and-play hardware, software, and AI components and IP that will allow an average engineering team to take advantage of endpoint AI. This month, QuickLogic and several partners are introducing just such an ecosystem.

QuickLogic, along with SensiML, General Vision, and Nepes Corporation, is introducing the “QuickAI” ecosystem and hardware development kit (HDK), which combines QuickLogic’s EOS S3 FPGA-based SoCs with NM500 neuromorphic processors and AI IP to allow design teams to add endpoint AI to a wide range of applications. The NM500 neuromorphic processor is built by Nepes using IP licensed from General Vision. It features 576 neurons while consuming a meager 0.1 watts of power. General Vision’s NeuroMem IP provides a scalable, silicon-trainable, low-power network architecture, which the company says is capable of learning and recalling patterns autonomously, without the need for high-powered data-center processors for training. Rounding out the QuickAI platform is SensiML’s Analytics Toolkit, which is designed to help designers quickly and easily build smart sensor algorithms for IoT edge/endpoint devices.

The General Vision NeuroMem IP provides the backbone of the platform, enabling embedded exact and fuzzy pattern matching and learning using a scalable architecture of radial basis function (RBF) neurons. The architecture is fully parallel, guaranteeing a fixed latency regardless of the number of neurons and delivering high levels of computation at very low clock frequencies for power efficiency. General Vision provides a tool suite and SDK called “Knowledge Builder” that trains and configures the neurons in the NeuroMem network.
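
To make the model concrete, here’s a minimal Python sketch of RBF-style recognition in the spirit of NeuroMem. Each committed neuron stores a prototype vector, a category, and an “influence field” radius; an input fires every neuron whose field covers it, and the closest firing neuron wins. The L1 distance metric and the default field size below are illustrative assumptions, not General Vision’s actual implementation, and the loop runs sequentially where the silicon compares all neurons in parallel.

import numpy as np

class RBFNeuron:
    """One committed prototype: a stored vector, a category, and an influence field."""
    def __init__(self, prototype, category, influence=100):
        self.prototype = np.asarray(prototype, dtype=np.int32)
        self.category = category
        self.influence = influence  # largest distance at which this neuron still fires

    def distance(self, pattern):
        # L1 (Manhattan) distance: an assumed, hardware-friendly metric
        return int(np.abs(self.prototype - np.asarray(pattern, dtype=np.int32)).sum())

def classify(neurons, pattern):
    # In silicon, every neuron computes its distance simultaneously, which is
    # why latency stays fixed no matter how many neurons are committed.
    best_dist, best_cat = None, None
    for n in neurons:
        d = n.distance(pattern)
        if d <= n.influence and (best_dist is None or d < best_dist):
            best_dist, best_cat = d, n.category
    return best_cat  # None means "unknown": no neuron's field covers the input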

The NM500 implements the General Vision NeuroMem architecture in a small-form-factor component that can be trained in the field to recognize patterns in real time. Multiple NM500s can be chained to provide an arbitrary number of neurons. Nepes also provides software tools for configuring and training the NM500 neurons. SensiML’s Analytics Toolkit is designed to automate the management of training data, optimize the choice of feature extraction algorithms, and automate code generation for the resulting AI solution. The QuickLogic EOS S3 voice- and sensor-processing platform performs audio processing and sensor aggregation. In addition to FPGA fabric, it includes an Arm Cortex-M4F core and a Flexible Fusion Engine (FFE) that can pick up the conventional processing chores on a tiny power budget.
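
“Trained in the field” maps naturally onto an incremental learning rule of the restricted-Coulomb-energy (RCE) family: when a labeled example is misrecognized, commit a new neuron holding that example, and shrink the fields of any neurons that fired for the wrong category. Building on the sketch above, here’s a rough approximation of that style of on-device learning; it is illustrative, not Nepes’s or General Vision’s actual algorithm.

def learn(neurons, pattern, category, max_influence=100):
    # One labeled example at a time; no separate offline training pass.
    recognized = False
    for n in neurons:
        d = n.distance(pattern)
        if d <= n.influence:
            if n.category == category:
                recognized = True
            else:
                # A wrong-category neuron fired: shrink its field past this example
                n.influence = max(d - 1, 1)
    if not recognized:
        # Nothing correct fired: commit a new neuron holding this example
        neurons.append(RBFNeuron(pattern, category, max_influence))

In this picture, chaining multiple NM500s simply extends the neuron list beyond the 576 neurons of a single device.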

The QuickAI HDK is designed as a demo, evaluation, and development platform for endpoint AI applications. It includes the QuickLogic EOS S3 in a “stamp module” that can also be used in production, two Nepes NM500 neuromorphic processors, two PDM microphones, a Nordic nRF51822 Bluetooth Low Energy (BLE) module, a USB-to-UART bridge, MX25R3235 flash memory, an AK9915 3-axis magnetic sensor, and a 70-pin expansion connector. The platform is expandable to add more NM500s for applications that require more neurons. The goal of the HDK is to reduce development time and time to market for endpoint AI applications involving motion, acoustic, or image processing.

If you’re designing industrial applications such as vision-based inspection, QuickAI can enable classification of textures (foods and surfaces, for example), using high-speed template learning and matching to adapt to changes in materials or color. The FPGA fabric can capture and aggregate sensor data, perform feature extraction using FFT or MFCC, and pass the reduced information along to the NM500 for pattern matching. The FFE provides an ultra-low-power always-on (AON) function accelerator. The result is an easy-to-design, low-power, high-performance, adaptable system that can perform advanced pattern matching at the edge without the need to push data up to the cloud for additional crunching.
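
As a rough illustration of that division of labor, the sketch below reduces a raw sample frame to a compact FFT-magnitude feature vector (the job the article assigns to the FPGA fabric) before handing it to the matcher from the earlier sketches (the NM500’s job). The 512-sample frame, 16-bin pooling, and 8-bit scaling are arbitrary choices for the example, not QuickAI parameters.

def extract_features(frame, n_bins=16):
    # Stand-in for FFT/MFCC feature extraction in the FPGA fabric: pool the
    # FFT magnitude spectrum into n_bins values scaled to an 8-bit range.
    spectrum = np.abs(np.fft.rfft(frame))[:n_bins * 4]
    pooled = spectrum.reshape(n_bins, -1).mean(axis=1)
    return (pooled / (pooled.max() + 1e-9) * 255).astype(np.int32)

# Hypothetical end-to-end flow: teach one labeled frame, then recognize it
rng = np.random.default_rng(0)
knock = rng.normal(size=512)          # stand-in for a 512-sample audio frame
neurons = []
learn(neurons, extract_features(knock), "knock")
print(classify(neurons, extract_features(knock)))   # -> "knock"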

Starting life as a smaller player in the FPGA market, QuickLogic has pivoted into numerous high-value niche markets where they could take advantage of the unique features of their programmable logic technology for targeted applications. Because of the low-power performance and low cost of their devices, they have carved out a good business in the mobile market, and the current trend toward moving AI to the endpoint in IoT systems has provided fertile ground for this type of platform to succeed. The collaboration with specialized AI players like General Vision, Nepes, and SensiML creates a robust development platform that should eliminate much of the friction for design teams wanting to take advantage of AI technology at the IoT edge. It will be interesting to watch how and where this technology catches on.
