
PowerVR AX2185 Accelerates Neural Nets

New IP Cores Aimed at Smartphones, Cameras, Consumer Goods

“The greatest danger of AI is that people conclude too early that they understand it.” — Eliezer Yudkowsky

Sometimes accidental discoveries are the best ones. Teflon was supposed to be a refrigerant. The first heart pacemaker was designed as a measuring device, but inventor Wilson Greatbatch put in the wrong resistor value. And Play-Doh was created to clean wallpaper.

Turns out, the graphics card in your PC is surprisingly good – almost accidentally talented – at neural-net processing, cryptocurrency mining, machine learning, and artificial intelligence. Who knew? With a few tweaks, your GPU can make a darned good robot brain, even outsmarting the “real” microprocessor in your system.

This bit of serendipity hasn’t gone unnoticed by the world’s GPU designers, of course. Never ones to let grass grow under their feet, companies like nVidia, AMD/ATI, and Imagination Technologies have rapidly pivoted their GPU architectures to capitalize on these new and interesting markets. Now you can buy GPUs to use as, well, GPUs, or you can buy them for completely different purposes.

This week’s announcement comes from Imagination, the keepers of the PowerVR flame. They’ve split their popular GPU family into two completely different architectures, one for traditional graphics tasks and another for accelerating neural networks. They both share the PowerVR brand name, but that’s about all they have in common.  

The first broad outlines of this came last year, when Imagination announced its PowerVR 2NX architecture. So, we knew that Imagination had an accelerator in the works. Now we know their names and what they look like.

Say hello to the AX2145 and AX2185. They’re the first two instantiations of the new 2NX architecture, and they’re pretty similar. One’s designed for maximum performance (the ’85), while the other is a milder, more “balanced” design, according to the company.

Both IP cores are available immediately, and both are, in fact, already being used by a pair of lead customers. Expect to see the first AX21x5-based products on the street in about a year, probably in the form of high-end Chinese smartphones, security cameras, or drones.

Broadly speaking, what separates the AX21x5 twins from other AI-focused designs from the major semiconductor vendors is power efficiency. Your nVidia GPU is never going to last long inside a cellphone, so designing for ultimate performance isn’t the goal. Instead, Imagination has to bear in mind that its customers are running on batteries, in confined spaces, and with no good way to cool the hardware. They’re focused on the Internet of Surveillance™, not Call of Duty II.

The AX2185 is the faster of the two designs, and evidently the fastest implementation on the PowerVR roadmap. Imagination says it can perform at 4.1 TOPS (trillions of operations per second), which neatly matches the top end of the family’s performance range when it was announced last year. If that kind of acceleration somehow isn’t good enough for you, you need multiple AX2185s.

The AX2145 is the little sister, with 1.0 TOPS performance, smaller die area, and less power consumption. Imagination sees this as a good fit for midrange smartphones (midrange in a few years, perhaps), digital TVs, and set-top boxes.

Weirdly, Imagination claims that the ’45 actually outperforms the ’85 in certain circumstances. Specifically, when memory bandwidth is tight, you’ll want to use the ’45, not its bigger sibling. That’s because both designs – like all neural-net accelerators – need a boatload of bandwidth to operate efficiently. Like GPUs and DSPs, NNAs are memory hogs, and throttling that memory can have a big effect on the engine’s efficiency. Imagination spent a lot of time benchmarking this behavior, and later explaining why it happens.

It also explains why the whole 2NX architecture supports funny bit widths. As our own Bryon Moyer explained back in October, the new PowerVR family is fixed-point only, and supports 8-bit and 4-bit integers, as well as some nontraditional bit sizes, like a 5-bit format. Add to that 12-bit, 7-bit, 6-bit, and other integers and you begin to see how hard Imagination worked to preserve memory size and bandwidth.

You can even tweak the bit depth on a layer-by-layer basis, adding extra precision where it’s needed and discarding it where it isn’t. You can also use different formats for weights and for data. It’s all flexible.
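Imagination’s tools aren’t public, but the per-layer idea is easy to sketch. The symmetric fixed-point scheme below is a generic illustration of mixed-precision quantization, not the actual 2NX format; the function and its scaling choice are assumptions for the sake of the example:

```python
import numpy as np

def quantize(x, bits):
    """Quantize a float array to signed fixed-point with `bits` total bits.

    The scale is chosen per-tensor so the largest magnitude maps to the
    largest representable integer (a simple symmetric scheme; real
    toolchains get fancier).
    """
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for 8-bit, 7 for 4-bit
    peak = np.max(np.abs(x))
    scale = peak / qmax if peak > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

# Different widths for weights vs. activations, chosen layer by layer:
weights = np.random.randn(64)
acts = np.random.randn(64)

w_q, w_scale = quantize(weights, 4)   # 4-bit weights: only 16 levels
a_q, a_scale = quantize(acts, 8)      # 8-bit activations: more precision

print(f"weights {32 // 4}x smaller, activations {32 // 8}x smaller than float32")
```

Dropping weights from 32-bit floats to 4-bit integers shrinks their footprint – and the bandwidth needed to stream them – by 8x, which is exactly the lever those odd bit widths pull.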

In bandwidth-constrained applications, the high-end AX2185 chokes if it’s not fed fast enough, wasting most of that potential performance. This is where the AX2145 outpaces it by as much as 50%, according to the company’s benchmarks.

How bad does your bandwidth have to be for this performance inversion to take place? YMMV, but Imagination hints that a system with bandwidth in the “low single digits” of Gbytes/sec would favor the smaller ’45 over the larger ’85 variant. Conversely, if you can provide “dozens of gigabytes per second” of bandwidth, you’ll be happier with the AX2185.
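A crude roofline-style estimate shows how a bandwidth ceiling caps effective throughput. The 4.1 TOPS figure is Imagination’s; the bytes-fetched-per-operation number below is a hypothetical illustration, not a published 2NX characteristic:

```python
def bound_by(peak_tops, bandwidth_gbs, bytes_per_op):
    """Roofline-style check: does memory or compute limit throughput?

    bytes_per_op is how many bytes of weights/activations each operation
    must pull from memory (lower with 4-bit weights, good caching, etc.).
    """
    peak_ops = peak_tops * 1e12
    # Operations per second the memory system can actually feed:
    fed_ops = (bandwidth_gbs * 1e9) / bytes_per_op
    achievable_tops = min(peak_ops, fed_ops) / 1e12
    limiter = "bandwidth" if fed_ops < peak_ops else "compute"
    return achievable_tops, limiter

# The AX2185's quoted 4.1 TOPS, with a made-up 0.01 bytes fetched per op:
print(bound_by(4.1, 3, 0.01))    # ~3 GB/s: bandwidth-bound, far below peak
print(bound_by(4.1, 50, 0.01))   # ~50 GB/s: compute-bound at the full 4.1
```

At low-single-digit GB/s the big engine idles waiting on memory, so a smaller core that needs less bandwidth per operation can come out ahead – which is the inversion Imagination describes.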

NNAs like these allow designers to stick more intelligence into end nodes, like security cameras that do their own object recognition. That’s great if you’re a camera designer, because you can charge more for your “smart” camera compared to the dumb ones you sold last year. It’s also good news for the overall system, because now you’re not piping full-resolution, full-rate video down an Ethernet cable to a waiting computer, which then has to analyze all those pixels in real time on its Intel, nVidia, or AMD processor (a task for which they are ill-suited, I can tell you). The whole system gets smarter, network bandwidth is reduced drastically, power consumption probably goes down, and the chance of someone intercepting your raw video stream pretty much disappears. Everybody wins.

NNAs are like the DSPs of the 1990s: everybody needs one but nobody’s sure how to program them. Every few years, a new DSP application would materialize that promised to catapult DSP chips and software into the mainstream. Modems! Voice recognition! Graphics! Machine vision! Each time, widespread adoption proved elusive and DSPs remained a niche product, ideally suited to frustratingly narrow application areas.

At first, GPUs looked set to follow the same path. Way too many GPU startups tried and failed to break into the mainstream. Only a few, including PowerVR, nVidia, and ATI (now AMD) survived the initial wave of optimism. Like restaurants, GPU companies tend to fail within the first 18–24 months.

Machine learning, artificial intelligence, and convolutional neural networks fell on the GPU industry like manna from heaven. Suddenly, a whole new application area dropped in their laps, ready-made, and nicely suited to existing chips. It’s like discovering that you can sell your floor wax as a dessert topping, too. But that doesn’t mean you can’t improve on the flavor, and so now the second generation of NNAs is emerging from their GPU progenitors. Existing GPUs were handy, plentiful, and affordable, but not quite perfect. Neural nets may have come as a surprise to the industry, but today’s NNA designers are wasting no time in capitalizing on the opportunity.
