
AI at the IoT Endpoint

QuickLogic Fosters Sensing Ecosystem

Computation is entering an era of unprecedented heterogeneous distribution. The diverse demands of IoT applications require everything from heavy-iron, deep-learning data-crunching to ultra-low-latency snap recognition and judgment. Our IoT devices and systems must be simultaneously aware of and responsive to their own local context, and able to harness the power of massive compute resources for more global issues. A self-driving vehicle can’t afford to send gobs of raw sensor data upstream to the cloud and then wait for an answer on target identification to return before deciding whether to brake or swerve. It needs to decide immediately whether or not there’s a human in the crosswalk, but it can wait awhile before rendering an AI judgment on whether the pedestrian’s attire was fashionable.

In intelligent assistants such as Alexa, Google Assistant, and Siri, it would be utterly impractical (and a major privacy intrusion) for every one of the millions of devices in service to send all the audio back to the cloud for wake-word recognition. To prevent the communication and compute problem from becoming untenable, we need to do as much of the AI processing as possible locally, without sending data over the network or waking power-hungry application processors. This brand of processing requires a combination of specialized hardware acceleration and miserly power consumption. It requires hardware that can easily adapt to various configurations of sensors at the IoT edge. It requires BOM-friendly low cost and high-volume applicability.

This is QuickLogic’s wheelhouse.

Life would be easy if every engineering team included data scientists who could design the training regimens working hand in hand with hardware experts who could partition the problem between conventional software, programmable hardware, and specialized neural network configuration. But life is not easy. Most projects don’t have access to the wide range of skills and expertise required to optimally engineer an AI endpoint for their IoT design. To make that happen, we need an ecosystem with plug-and-play hardware, software, and AI components and IP that will allow an average engineering project to take advantage of endpoint AI. This month, QuickLogic and several partners are introducing just such an ecosystem.

QuickLogic, along with SensiML, General Vision, and Nepes Corporation, is introducing the “QuickAI” ecosystem and development HDK, which combines QuickLogic’s EOS S3 SoC FPGAs with NM500 neuromorphic processors and AI IP to allow design teams to add endpoint AI to a wide range of applications. The NM500 neuromorphic processor is built by Nepes using IP licensed from General Vision. It features 576 neurons while consuming a meager 0.1 watt of power. General Vision’s NeuroMem IP provides a scalable, silicon-trainable, low-power network architecture, which the company says is capable of learning and recalling patterns autonomously without the need for high-powered data-center processors for training. Rounding out the QuickAI platform is SensiML’s analytics toolkit, which is designed to help designers quickly and easily build smart sensor algorithms for IoT edge/endpoint devices.

The General Vision NeuroMem provides the backbone of the platform, enabling embedded exact and fuzzy pattern matching and learning using a scalable architecture of Radial Basis Function neurons. The architecture is parallel, guaranteeing a fixed latency regardless of the number of neurons and delivering high levels of computation at very low clock frequencies for power efficiency. General Vision provides a tool suite and SDK called “Knowledge Builder” that trains and configures the neurons in the NeuroMem network.
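To make the Radial-Basis-Function idea concrete, here is a minimal sketch (not General Vision's actual implementation or API) of the general RBF classification scheme such architectures use: each neuron stores a prototype pattern, a category, and an influence field; a neuron "fires" if an incoming pattern lies within its influence field, and the closest firing neuron wins. The function and variable names below are illustrative only.

```python
import numpy as np

def rbf_classify(pattern, prototypes, categories, influence_fields):
    """Toy RBF-style classifier: each neuron holds a prototype vector,
    a category label, and an influence field (distance threshold).
    A neuron fires when the L1 distance to the input is below its
    influence field; the firing neuron with the smallest distance wins."""
    dists = np.abs(prototypes - pattern).sum(axis=1)  # L1 distance per neuron
    firing = dists < influence_fields
    if not firing.any():
        return None  # "unknown" -- no neuron recognized the pattern
    winner = np.argmin(np.where(firing, dists, np.inf))
    return categories[winner]

# Toy example: two trained "neurons" holding 4-element prototypes
protos = np.array([[10, 10, 10, 10],
                   [200, 200, 200, 200]])
cats = ["quiet", "loud"]
fields = np.array([40, 40])

print(rbf_classify(np.array([12, 9, 11, 10]), protos, cats, fields))      # quiet
print(rbf_classify(np.array([100, 100, 100, 100]), protos, cats, fields)) # None
```

Because every neuron computes its distance independently, a hardware implementation can evaluate all of them in parallel, which is what gives the architecture its fixed recognition latency.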

The NM500 implements the General Vision NeuroMem in a small form-factor component, which can be trained in the field to recognize patterns in real time. Multiple NM500s can be chained to provide an arbitrary number of neurons. Nepes also provides software tools for configuring and training the NM500 neurons. SensiML’s Analytics Toolkit is designed to automate the management of training data, optimize the choice of feature extraction algorithms, and automate code generation for the resulting AI solution. The QuickLogic EOS S3 voice- and sensor-processing platform performs audio processing and sensor aggregation. In addition to FPGA fabric, it includes Arm Cortex-M4F and FFE (Flexible Fusion Engine) cores that can pick up the conventional processing chores on a tiny power budget.

The QuickAI HDK platform is designed as a demo, evaluation, and development platform for endpoint AI applications. It includes the QuickLogic EOS S3 in a “stamp module” that can also be used in production, two Nepes NM500 neuromorphic processors, two PDM microphones, an nRF51822 Bluetooth low energy (BLE) module, a USB-to-UART bridge, MX25R3235 flash, an AK9915 3-axis magnetic sensor, and a 70-pin expansion connector. The platform is expandable to add more NM500s for applications that require more neurons. The goal of the HDK is to reduce development time and time to market for endpoint AI applications that involve motion, acoustic, or image processing.

If you’re designing industrial applications such as vision-based inspection, QuickAI can enable classification of textures, such as foods and surfaces, using high-speed template learning and matching to adapt to changes in materials or color. The FPGA can capture and aggregate sensor data, perform feature extraction using FFT or MFCC, and pass the reduced information along to the NM500 for processing. The FFE provides an ultra-low-power always-on (AON) function accelerator. The result is an easy-to-design, low-power, high-performance adaptable system that can perform advanced pattern matching at the edge without the need to push data up to the cloud for additional crunching.

Starting life as a smaller player in the FPGA market, QuickLogic has pivoted into numerous high-value niche markets where it could exploit the unique features of its programmable logic technology for targeted applications. Because of the low-power performance and low cost of its devices, the company has carved out a good business in the mobile market, and the current trend toward moving AI to the endpoint in IoT systems has provided fertile ground for this type of platform to succeed. The collaboration with specialized AI players like General Vision, Nepes, and SensiML creates a robust development platform that should eliminate much of the friction for design teams wanting to take advantage of AI technology at the IoT edge. It will be interesting to watch how and where this technology catches on.
