
More AI at the Edge

Lattice Launches sensAI Stack

AI in edge devices is expected to experience mind-blowing growth over the next half decade, with predictions exceeding 100% CAGR. That means our IoT devices are going to be getting smart, and doing it quickly. Most of the applications of AI in edge or endpoint devices center on sensors. Our devices are equipped with increasing numbers of increasingly complicated sensors, drowning them in data. In most applications, however, it isn’t practical to simply push all that raw sensor data up to the cloud for further processing. We need to make some sense of it now, at the edge, embedded inside our device.

Most AI schemes involve discrete steps for training and inferencing. Training can be accomplished at our leisure in compute-rich, floating-point-having, plentifully powered heavy-iron computing environments. Then, for in-the-field inferencing, the trained machine can be ported to our embedded system in a way that allows hyper-efficient, real-time analysis at the edge.
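The usual bridge between those two steps is quantization: train with floats, then reduce weights and activations to small integers so the edge device can do cheap integer math. The sketch below is purely illustrative (it is not Lattice's tooling, and the scale/rescale scheme is a common textbook approach, not anything sensAI-specific): it quantizes a "trained" float layer to int8, runs the matrix multiply in integer arithmetic, and rescales the result.

```python
import numpy as np

# Illustrative sketch of post-training quantization -- the general idea
# behind "train in float, infer in integers at the edge." Not vendor code.

def quantize(w, bits=8):
    """Map float values to signed integers with a per-tensor scale."""
    qmax = 2 ** (bits - 1) - 1                  # 127 for int8
    scale = np.max(np.abs(w)) / qmax
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def int_dense(x_q, x_scale, w_q, w_scale):
    """Integer matmul, then one float rescale to recover real-valued output."""
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32)   # cheap integer MACs
    return acc * (x_scale * w_scale)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3)).astype(np.float32)   # "trained" weights
x = rng.normal(size=(2, 4)).astype(np.float32)   # sensor-derived features

w_q, w_s = quantize(w)
x_q, x_s = quantize(x)

full = x @ w                                  # float reference
approx = int_dense(x_q, x_s, w_q, w_s)        # quantized edge-style path
print(np.max(np.abs(full - approx)))          # small quantization error
```

The integer path trades a small accuracy loss for multipliers and memory footprints that fit comfortably in low-power silicon.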

This week, at the 2018 Embedded Vision Summit in Santa Clara, Lattice Semiconductor will unveil sensAI, the most recent FPGA-based platform and ecosystem aimed at enabling AI at the edge for a variety of applications including mobile, smart home, smart city, smart factory, and smart car. Lattice says sensAI is a “complete technology stack” that combines modular hardware kits, neural network IP cores, software tools, reference designs, and custom design services – all aimed at accelerating the development of machine learning inferencing for IoT edge devices.

Lattice says sensAI is “Optimized to provide the ultra-low power (under 1 mW–1 W), small size, and production-priced (~$1-$10 USD) benefits of ASICs, with FPGA flexibility to support evolving algorithms, interfaces, and tailored performance.” Interestingly, Lattice has built their solution around two different FPGA options. The ECP5, the fifth generation of Lattice’s value-based FPGA, handles the “high end” of the sensAI solution, where video processing is required. Applications that require less performance, a smaller form factor, and ultra-low power consumption can use the iCE40 UltraPlus – a tiny FPGA based on technology gained with the acquisition of SiliconBlue several years ago.

It’s safe to assume that the Lattice engineers who created the initial ECP FPGA family (internally dubbed “el-cheapo” at the time) and the SiliconBlue engineers who created the predecessor to today’s iCE families would never have guessed that their designs would someday be powering neural networks to extract features from sensor data in IoT edge applications. In both cases, the FPGA architecture was originally designed to fill a niche where more expensive and power-hungry FPGAs from the large suppliers couldn’t work. These were glue-logic devices whose role in life was to fill the lucrative but unglamorous role of bridging interfaces and handling the messy interstices of larger logic designs. Now, the descendants of those chips have found new and exciting life in leading roles on the front edge of today’s technology curve.

Lattice’s solutions include two modular hardware platforms – the ECP5™ device-based Video Interface Platform (VIP), which includes Lattice’s Embedded Vision Development Kit, and the iCE40 UltraPlus device-based Mobile Development Platform (MDP), aimed at handheld and battery-powered devices. Both platforms include AI-specific IP cores such as a Convolutional Neural Network (CNN) accelerator and a Binarized Neural Network (BNN) accelerator. They also include a neural network compiler tool for Caffe/TensorFlow to FPGA, in addition to the usual Lattice “Radiant” and “Diamond” FPGA development packages. To get you jump-started on your AI implementation, Lattice includes reference designs such as face detection, key phrase detection, object counting, face tracking, and speed sign detection. These are some of the most common types of applications for design teams wanting inferencing at the edge, and they also provide good exemplars for similar applications.
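The BNN accelerator is worth a moment's thought, because binarization is what makes neural networks such a natural fit for FPGA fabric. When weights and activations are constrained to {-1, +1}, a dot product reduces to XNOR plus popcount – logic that maps directly onto LUTs with no multipliers at all. The sketch below shows the equivalence; it illustrates the general BNN technique, not Lattice's actual IP.

```python
import numpy as np

# Hedged sketch of the idea behind a Binarized Neural Network (BNN) layer:
# with values in {-1, +1}, multiply-accumulate becomes XNOR + popcount.
# Illustrative only -- not Lattice's accelerator implementation.

def binarize(x):
    """Constrain real values to {-1, +1} by sign."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bnn_dot(a, b):
    """Dot product of +/-1 vectors: encode +1 as bit 1, -1 as bit 0."""
    n = a.size
    a_bits = a > 0
    b_bits = b > 0
    matches = np.count_nonzero(a_bits == b_bits)   # XNOR + popcount
    return 2 * matches - n                          # equals (a * b).sum()

rng = np.random.default_rng(1)
a = binarize(rng.normal(size=64))
b = binarize(rng.normal(size=64))

# The bit-trick result matches the conventional arithmetic dot product.
assert bnn_dot(a, b) == int((a.astype(int) * b.astype(int)).sum())
```

In hardware, the XNOR/popcount form means a wide BNN layer costs a handful of LUTs per neuron instead of DSP multipliers – which is how a sub-watt FPGA can run inference at all.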

Lattice says that their solutions are “optimized for ultra-low power consumption (under 1 mW–1 W), small package size (5.5 mm²–100 mm²), interface flexibility (MIPI® CSI-2, LVDS, GigE, etc.), and high-volume pricing (~$1-$10 USD).” In each case, you can assume that the lower end of those numbers is the iCE-based solution and the higher end the ECP5. Putting the stack to work at two distinct points on the power/performance/cost curve is a slick move by Lattice, taking their silicon and solutions to a broader set of applications and positioning themselves as an edge-AI supplier with more than one trick up their sleeve.

Because AI can be intimidating for many engineering teams, Lattice has partnered with a number of design service companies to deliver custom solutions for broad market applications, including smart home, smart city, and smart factory. If you don’t want to dive into the deep end of AI world on your first try, it’s nice to know that there is expert help available for most of the common types of applications, and that those experts are already competent with Lattice’s solution and ecosystem.

Last week, we looked at a similar edge-based AI solution, QuickLogic’s QuickAI ecosystem, and, while the two offerings have similar goals, it is interesting to see the two differing approaches side by side. In the case of Lattice, the solution relies more heavily on the FPGA component, implementing the neural network inferencing as FPGA IP. In QuickLogic’s case, the neural network load is handled by stand-alone neuromorphic processors. This may make Lattice’s solution a bit more “one stop shopping” and possibly more BOM-friendly (looking at the entire solution), while QuickLogic’s ecosystem may bring more neural network performance and scalability because of the specialized neuromorphic chips included in their offering.

AI at the edge tends to be mass market by its very nature. The compute challenge is posed by large numbers of sensor-laden nodes processing piles of data at the edge when uploading to the cloud is impractical. That means the platforms addressing the issue have to be mass market friendly, with BOM costs at a minimum, power consumption under control, and backed away from the bleeding edge on form factors that would make board cost prohibitive. Since Lattice has spent the last decade focusing on consumer and mass-market applications, the company is well acquainted with the requirements for delivering solutions in that market, and that experience should serve them well as they attack the AI opportunity.

It is interesting to see the evolution of Lattice from a “me too” FPGA supplier to a leader in their own right in rapid-growth markets that traditional FPGA companies have passed by. As the technology landscape shifts with the slowdown of Moore’s Law, the explosion of AI, and the rapid deployment of IoT technology, it’s anybody’s guess as to which pieces of technology will become the must-have defensible solutions that capture emerging key markets. AI at the edge seems like a pretty good bet.
