The upshot: MLPerf has announced inference benchmarks for neural networks, along with initial results.
Congratulations! You now have the unenviable task of deciding which neural-network (NN) inference engine to use in your application. You want, of course, the fastest one. And it needs to run at the edge on a battery-powered device. All you have to do is … Read More → "Neural-Net Inference Benchmarks"
We’ve looked before at a number of architectures for accelerating neural-network inference calculations. Those we saw at Hot Chips, for example, were big, beefy processing units best targeted at cloud-based inference. But, as we’ve mentioned, there is lots of energy going … Read More → "Graph-Based AI Accelerators"
“I’d love to be incredibly wealthy for no reason at all.” – Johnny Rotten
Among sports car aficionados, a “Super Seven” is a 1960s-era Lotus: light, fast, nimble, and characteristically fragile. Marvel superhero Wolverine drives one; the unnamed protagonist in The Prisoner famously had one, too.
… Read More → "Nuvia: Designed for the One Percenters"
The upshot: Mindtech provides a capability for creating fully annotated synthetic training images to complement real images for improved AI training.
We’ve spent a lot of time looking at AI training and AI inference and the architectures and processes used for each of those. Where the AI task involves images, we’ve blithely referred to the need … Read More → "Synthetic Images for AI Training"
The upshot: Memories can be arranged such that an “access” becomes a multiply-accumulate function. Storing weights in the memory and using activations as inputs saves data movement and power. And there are multiple ways to do this using RRAM, flash, and SRAM – and then there’s an approach involving DRAM, but it’s completely different.
In the scramble … Read More → "In-Memory Computing"
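The idea in that teaser — weights stored in a memory array, activations driven onto the rows, and the "access" itself performing a multiply-accumulate as cell contributions sum on each bit line — can be sketched in plain Python. This is a minimal behavioral model of the concept, not any vendor's implementation; the function name and array layout are illustrative assumptions.

```python
# Behavioral sketch of in-memory computing: each column of a memory
# array stores one neuron's weights; driving the rows with activation
# values makes a single "access" act as a multiply-accumulate.
# Names and layout are illustrative, not from any vendor's API.

def in_memory_mac(weight_array, activations):
    """Model one array read: for each column, sum weight * activation
    over all rows (the analog accumulation on the bit line)."""
    num_rows = len(weight_array)
    num_cols = len(weight_array[0])
    outputs = []
    for col in range(num_cols):
        # Each cell contributes weight * activation; contributions
        # from all rows sum on the column's bit line.
        outputs.append(sum(weight_array[row][col] * activations[row]
                           for row in range(num_rows)))
    return outputs

# 3 rows (activations) x 2 columns (neurons) of stored weights.
weights = [[1, 2],
           [0, 1],
           [3, 0]]
acts = [1, 1, 2]
print(in_memory_mac(weights, acts))  # → [7, 3]
```

The point of the arrangement is that the weights never move: only the activations travel to the array and only the accumulated sums travel out, which is where the data-movement and power savings come from.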