
Brainchip Debuts Neuromorphic Chip

Akida Neuromorphic SoC Takes on CNNs

Convolutional Neural Networks (CNNs) have dominated the discussion of AI advancement for the past couple of years. But CNNs have one glaring weakness: a heavy reliance on massive amounts of multiplication. That arithmetic burden has spawned a plethora of initiatives to accelerate both the training and inference phases of deep learning with CNNs, and a wide variety of hardware and software architectures designed to improve CNN performance and efficiency, both in the data center and at the edge. FPGAs, GPUs, and a range of specialized hardware architectures are competing to capture what is expected to be an enormous market for AI computing over the coming decades.

Brainchip is taking a different approach.

Brainchip is a public company (listed on the Australian stock exchange) that has spent the past decade developing “neuromorphic” computing hardware and software based on spiking neural networks (SNNs). Structurally, SNNs are closer to biological neurons than their CNN cousins, and, according to Brainchip’s Bob Beachler, that allows them to operate with a much smaller computational load, enabling more efficient inference at the edge. CNNs rely on linear algebra: matrix multiplication, rectified linear units for activation, pooling layers, “fully connected” layers, and very large training datasets (with training typically run off-chip in data centers). SNNs, by contrast, use threshold logic and connection reinforcement via “spikes,” with feed-forward training that can be done on- or off-chip, shorter training cycles, and continuous learning.
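
To make the contrast concrete, here is a minimal Python sketch (purely illustrative, not Brainchip’s code) of the arithmetic a single conventional CNN neuron performs: a dot product costing one multiply-accumulate (MAC) per weight, followed by a ReLU activation. All names here are hypothetical.

```python
import numpy as np

def cnn_neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """One conventional CNN 'neuron': a dot product (one MAC per weight)
    followed by a rectified-linear (ReLU) activation."""
    pre_activation = float(np.dot(inputs, weights)) + bias  # N multiply-accumulates
    return max(0.0, pre_activation)                         # ReLU

# A single 3x3 kernel position over 256 input channels costs 2,304 MACs,
# and a full convolutional layer repeats that at every output pixel.
inputs = np.random.rand(3 * 3 * 256)
weights = np.random.rand(3 * 3 * 256)
print(cnn_neuron(inputs, weights, bias=0.1))
```

This per-weight multiplication, repeated across every layer of every frame, is the workload the spiking approach aims to sidestep.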

Historically, SNNs have gotten less attention than CNNs – partly because they were thought to be more computationally demanding. So, what gives with Brainchip?

Brainchip’s software component, “Brainchip Studio,” is based on application software developed by SpikeNet Technology (acquired by Brainchip in 2017) and the Centre de Recherche Cerveau et Cognition. According to the company, Brainchip Studio is a supervised learning application that can be trained instantaneously, delivers high accuracy, requires very little power, and excels particularly where large training datasets are not available.

This week, Brainchip is announcing a new chip called Akida, designed to implement SNNs in edge computing applications: primarily embedded vision and image recognition, but also financial analysis and cybersecurity. Akida is expected to sample in late 2019 at a cost of about $10, which puts it squarely in competition with devices such as Intel’s Movidius parts for the lucrative edge-inferencing market. Brainchip claims that Akida can deliver significantly better performance per watt than Intel’s Movidius Myriad 2 VPU, which should be a compelling advantage in power-stingy edge and embedded applications.

Beachler says Akida packs 1.2 million neurons and 10 billion synapses in an 11-layer SNN, along with a RISC processor, to work its magic, and that it should be good for up to 1,400 frames per second per watt. Those are impressive numbers, particularly for a $10 chip. That kind of cost- and power-performance puts Akida in a position where FPGAs (one of the hot contenders for inferencing applications) cannot go. Brainchip also claims excellent accuracy at those power and performance levels, comparable to what CNN approaches achieve.

Starting on the input side, Akida has both sensor interfaces (for embedded applications) and data interfaces (for co-processor applications). The sensor interfaces include pixel, audio, DVS (dynamic vision sensor), analog, and digital. Data interfaces include PCIe, USB 3.0, Ethernet, CAN, and UART. These interfaces drive a “conversion complex” whose job is to convert the sensor and data interface outputs into spikes, which are then passed to the Akida neuron fabric.
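
Brainchip hasn’t published the internals of the conversion complex, but a common technique for turning pixel intensities into spike trains is rate coding, in which brighter pixels spike more often. Here is a minimal, purely illustrative sketch under that assumption (the function name and parameters are hypothetical):

```python
import numpy as np

def pixels_to_spikes(frame: np.ndarray, steps: int = 100, seed: int = 0) -> np.ndarray:
    """Toy rate coding: each pixel's intensity (0..1) becomes the probability
    of a spike at each time step, yielding a (steps, H, W) boolean spike train."""
    rng = np.random.default_rng(seed)
    return rng.random((steps,) + frame.shape) < frame

frame = np.random.rand(28, 28)      # stand-in for one sensor frame
spikes = pixels_to_spikes(frame)
print(spikes.shape, spikes.mean())  # brighter pixels spike more often
```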

In an SNN, spikes at the inputs of a neuron are integrated over time and magnitude, and when that integral exceeds a certain threshold, a corresponding spike is generated at the neuron’s output. Because of this architecture, there is considerably less transistor toggling for each neuron event than one sees in a CNN implementation, and that orders-of-magnitude reduction in switching means dramatically lower power consumption – assuming other variables such as semiconductor process are equivalent.
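
A short simulation makes the mechanism concrete. The sketch below implements a generic leaky integrate-and-fire neuron, a standard textbook model; Akida’s actual neuron circuit may differ. Note that the “dot product” here collapses to summing the weights of the inputs that actually spiked, so quiet inputs cost nothing, which is where the reduced toggling comes from.

```python
import numpy as np

def integrate_and_fire(spike_train, weights, threshold=1.0, leak=0.95):
    """Leaky integrate-and-fire: weighted input spikes accumulate on a
    membrane potential; crossing the threshold fires and resets the neuron."""
    potential, output = 0.0, []
    for spikes_t in spike_train:          # one vector of input spikes per time step
        potential = potential * leak + float(np.dot(spikes_t, weights))
        if potential >= threshold:
            output.append(1)
            potential = 0.0               # reset after firing
        else:
            output.append(0)
    return output

rng = np.random.default_rng(1)
spike_train = rng.random((20, 8)) < 0.3   # 20 time steps, 8 input lines
weights = rng.random(8) * 0.5
print(integrate_and_fire(spike_train, weights))
```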

One interesting element of the SNN-vs-CNN contest is the possibility of ongoing, in-system training, versus pre-training with large datasets in a data center or cloud environment. It’s not yet clear how this will play out in real-world applications, but Beachler says that in areas such as financial analysis, unsupervised learning can offer big benefits.

Brainchip’s test-chip benchmarking results are impressive: the company claims 1,100 fps on CIFAR-10 at 82% accuracy, using less than 0.2 watts, or about 6K fps per watt, from a $10 chip. The company says this compares with 83% accuracy at 6K fps/watt from IBM’s TrueNorth (at a cost of around $1K) and 80% at 6K fps/watt from a Xilinx ZC709 (also around $1K).

Akida’s efficiency is due to a number of factors. SNNs are “math-lite,” requiring no MACs (multiply-accumulates) and no “weight swapping.” Akida’s use of a fixed neuron model with right-sized synapses and minimized on-chip RAM (6MB, compared with 30-50MB for typical CNN implementations) helps power efficiency as well. A global “spike bus” connects all of the neural processors, and both training and firing thresholds are programmable. Brainchip says its flexible neural processor cores are highly optimized to perform convolutions. Akida is multi-chip expandable to 1.2 billion neurons, so it should be easy to scale an Akida implementation to fit your application’s needs.
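
A back-of-envelope comparison, using assumed (not Brainchip’s) numbers, shows why MAC-free, event-driven operation saves so much work: a dense CNN layer touches every weight on every frame, while an event-driven SNN layer does an addition only when an input line actually spikes.

```python
# Illustrative operation counts for one fully connected layer (assumed sizes).
inputs, neurons = 4096, 1024
macs_dense = inputs * neurons                       # every connection, every frame
spike_rate = 0.05                                   # assume 5% of inputs spike per step
adds_event_driven = int(inputs * spike_rate) * neurons
print(f"dense CNN MACs:    {macs_dense:,}")         # 4,194,304 multiply-accumulates
print(f"event-driven adds: {adds_event_driven:,}")  # ~20x fewer ops, and adds, not MACs
```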

Brainchip’s development environment becomes available in Q3 2018 and should support an FPGA-based acceleration board in advance of the Akida chip’s availability in 2019. It will be interesting to watch how Akida’s SNN approach competes with CNN-based devices in the same edge and embedded markets. Certainly Akida’s low cost and low power consumption are compelling, and the SNN approach appears to have merit and (according to the company’s benchmarking) should be competitive in accuracy. However, it’s a long time until these chips hit distribution in 2019, and this is a very fast-evolving market and technology.

