
LeapMind Announces Efficiera v2 Ultra-Low Power AI Inference Accelerator IP

Enhanced product targets a wider range of applications, based on results and evaluation from the initial market introduction

November 30, 2021 – Tokyo, Japan – LeapMind Co., Ltd., a leading creator of the standard in edge artificial intelligence (AI), today announced version 2 (hereinafter: v2) of its ultra-low power AI inference accelerator IP Efficiera, scheduled to be available in December 2021. Efficiera is highly valued for its low power consumption, high performance, small footprint, and performance scalability. Efficiera v2 expands the range of target applications by covering a wider performance range and broadening product availability, while maintaining the circuit scale of the minimum configuration. These enhancements are based on learnings from the initial product introduction and market evaluation.

“Last year we officially launched the commercial version, v1, and many companies evaluated Efficiera. By the end of September 2021, we had signed license agreements with eight domestic companies. Our corporate philosophy of ‘spreading to the world new devices that use machine learning’ is steadily being realized through the provision of v1. We will continue striving to popularize AI through further technological innovation and expansion of our product lineup,” said Soichi Matsuda, CEO of LeapMind.

Efficiera is an ultra-low power AI inference accelerator IP specialized for convolutional neural network (CNN) inference processing that runs as a circuit on field-programmable gate array (FPGA) or application-specific integrated circuit (ASIC) devices. Its ultra-small quantization technology reduces the number of quantization bits to 1–2, maximizing the power and area efficiency of the convolution operations that account for most of inference processing, without the need for advanced semiconductor manufacturing processes or special cell libraries. With this product, deep learning functions can be incorporated into a variety of edge devices, including consumer electronics such as home appliances, industrial equipment such as construction machinery, surveillance cameras, broadcasting equipment, and small machines and robots constrained by power, cost, and thermals, all of which have been technically difficult in the past.
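To give a feel for the idea, the sketch below illustrates extreme quantization in general, not LeapMind's actual algorithm: weights are reduced to 1 bit (sign plus a shared scale) and activations to 2 bits. The scale factor, clipping range, and example values are all illustrative assumptions.

```python
import numpy as np

def binarize_weights(w):
    """1-bit weight quantization: keep only the sign, scaled by the mean magnitude."""
    alpha = np.abs(w).mean()          # per-tensor scale factor (illustrative choice)
    return alpha * np.sign(w)

def quantize_activations(x, bits=2):
    """Uniform quantization of non-negative activations to `bits` bits."""
    levels = 2 ** bits - 1            # 3 non-zero levels for 2 bits
    x = np.clip(x, 0.0, 1.0)          # assumes activations are pre-scaled to [0, 1]
    return np.round(x * levels) / levels

w = np.array([0.7, -0.3, 0.1, -0.9])
x = np.array([0.2, 0.5, 0.8, 1.0])
wq = binarize_weights(w)              # [0.5, -0.5, 0.5, -0.5]
xq = quantize_activations(x)          # [1/3, 2/3, 2/3, 1.0]
# The dot product now involves only signs and a handful of levels, which is
# what makes compact, low-power MAC hardware feasible.
y = np.dot(wq, xq)
```

With only signs and a few discrete levels, the multiply-accumulate hardware can be dramatically simpler than full floating-point or 8-bit integer datapaths, which is the source of the power and area savings described above.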

“Since the official release of v1, we have aimed to develop the world’s most power-efficient DNN accelerator, strengthening our design and verification methodology and our development process. We developed v2 so that it can also be adopted in ASICs and application-specific standard products (ASSPs). On the deep learning side, we are developing inference models to maximize the benefits of ultra-small quantization technology. LeapMind’s greatest strength is that it masters both of these wheels, hardware and deep learning,” said Dr. Hiroyuki Tokunaga, Director and CTO of LeapMind.

By improving its design and verification methodology and reviewing its development process, LeapMind has brought product quality to a level suitable not only for FPGAs but also for ASICs and ASSPs. LeapMind is also beginning to provide the Network Development Kit (NDK), a model development environment that enables users to develop deep learning models for Efficiera, which was not previously possible.

Hardware features

  • Performance scalability of up to 48x through MAC array multiplexing and multi-core configurations.
  • v2 triples the maximum number of MAC arrays in the Convolution pipeline compared with v1 (where x1 and x4 were selectable), and further expands performance scalability by allowing up to 4 cores.
  • Hardware execution of skip connections and pixel embedding, in addition to convolution and quantization.
  • Execution time was analyzed on resources equivalent to Efficiera v1 in the same configuration to assess whether additional hardware functions are required.

Integration into SoC

  • AMBA AXI interface.
  • AMBA AXI continues to be used as the external interface; viewed as a black box, the interface is the same as before, and it is now easier to migrate from an existing design.
  • Single clock domain.

Target frequency in FPGA

  • The operating frequency on FPGA is unchanged; although it depends on the device, about 150 to 250 MHz can be expected.
  • 256 GOP/s @ 125MHz (1 core).
  • Up to 12 TOP/s @ 250MHz (2 cores).
  • Provided as encrypted RTL.
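The figures above are mutually consistent under a simple linear-scaling assumption. The sketch below is only a sanity check, not a vendor-provided formula; the x12 MAC-array multiplier and the minimum-configuration baseline are inferences from this announcement, not stated parameters.

```python
# Hypothetical linear-scaling model for the quoted throughput figures.
# Assumed baseline: 256 GOP/s at 125 MHz, 1 core, minimum MAC-array configuration.
BASE_GOPS = 256
BASE_MHZ = 125

def estimated_gops(mhz, mac_multiplier=1, cores=1):
    """Throughput estimate, assuming linear scaling in frequency, MAC arrays, and cores."""
    return BASE_GOPS * (mhz / BASE_MHZ) * mac_multiplier * cores

# An inferred x12 MAC-array maximum times 4 cores gives 48x the minimum
# configuration, matching the "up to 48 times" scalability claim.
relative_scalability = 12 * 4                              # 48
peak = estimated_gops(250, mac_multiplier=12, cores=2)     # 12288 GOP/s, i.e. ~12 TOP/s @ 250 MHz, 2 cores
```

Under these assumptions, the "256 GOP/s @ 125MHz (1 core)" and "up to 12 TOP/s @ 250MHz (2 cores)" figures line up, with the remaining headroom coming from the 4-core maximum.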

Network Development Kit (NDK) 

  • A package of the code and information required to create ultra-small quantized DL models for Efficiera.
  • Developers already working with deep learning models on GPUs can use it immediately.
  • Supported DL frameworks: PyTorch and TensorFlow 2.
  • The training environment is a GPU-equipped Linux server.
  • The inference environment is a device equipped with Efficiera.
  • Support provided by LeapMind.

LeapMind welcomes trials and feedback from all interested parties, including system-on-chip (SoC) vendors and end-user product designers. To obtain Efficiera v2, please contact us at business@leapmind.io. For more product information, please visit: https://leapmind.io/en/business/ip/

About LeapMind
LeapMind Inc. was founded in 2012 with the corporate mission “To create innovative devices with machine learning and make them available everywhere.” Total investment in LeapMind to date has reached 4.99 billion yen (as of May 2021). The company’s strength is in extremely low bit quantization for compact deep learning solutions. It has a proven track record of achievement with over 150 companies, many of which are centered in manufacturing, including the automobile industry. It is also developing its Efficiera semiconductor IP, based on its experience in the development of both software and hardware.

Head office: Shibuya Dogenzaka Sky Building 5F, 28-1 Maruyama-cho, Shibuya-ku, Tokyo 150-0044
Representative: Soichi Matsuda, CEO
Established: December 2012
URL: https://leapmind.io/en/
