The upshot: MLPerf has announced inference benchmarks for neural networks, along with initial results.
Congratulations! You now have the unenviable task of deciding which neural-network (NN) inference engine to use in your application. You want, of course, the fastest one. And it needs to run at the edge on a battery-powered device. All you have to do is compare all of the options out there to see which works best!
But here’s the thing: there’s an enormous amount of variability …