
MLPerf Releases Over 500 Inference Benchmark Results, Showcasing a Wide Range of Machine Learning Solutions

Mountain View, CA – November 6, 2019 – After introducing the first industry-standard inference benchmarks in June 2019, today the MLPerf consortium released over 500 inference benchmark results from 14 organizations. These benchmarks measure how quickly a trained neural network can process new data for a wide range of applications (autonomous driving, natural language processing, and many more) across a variety of form factors (IoT devices, smartphones, PCs, servers, and cloud solutions). The results are available on the MLPerf website at https://mlperf.org/.
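To illustrate the kind of measurement these benchmarks formalize, below is a minimal, hypothetical sketch of timing single-query inference latency and throughput in Python. The `model` callable, `run_inference_benchmark` helper, and random input are illustrative stand-ins; the actual MLPerf suite uses its LoadGen harness, fixed models and datasets, multiple load scenarios, and accuracy targets, none of which are reproduced here.

```python
import time
import statistics
import numpy as np

def run_inference_benchmark(model, input_shape, num_queries=1000, warmup=50):
    """Time one-query-at-a-time inference, loosely in the spirit of
    MLPerf's SingleStream scenario (simplified; no LoadGen, no accuracy check)."""
    x = np.random.rand(*input_shape).astype(np.float32)

    # Warm up so one-time costs (JIT compilation, cache fills) don't skew results.
    for _ in range(warmup):
        model(x)

    latencies = []
    for _ in range(num_queries):
        start = time.perf_counter()
        model(x)
        latencies.append(time.perf_counter() - start)

    latencies.sort()
    p90 = latencies[int(0.90 * len(latencies))]  # MLPerf SingleStream reports tail latency
    return {
        "mean_ms": statistics.mean(latencies) * 1e3,
        "p90_ms": p90 * 1e3,
        "queries_per_second": 1.0 / statistics.mean(latencies),
    }

if __name__ == "__main__":
    # Stand-in "model": a single dense layer applied to a flattened 224x224 RGB image.
    rng = np.random.default_rng(0)
    weights = rng.standard_normal((224 * 224 * 3, 10)).astype(np.float32)
    model = lambda x: x.reshape(1, -1) @ weights

    print(run_inference_benchmark(model, (224, 224, 3)))
```

Even this toy harness shows why warmup runs and tail-latency percentiles matter: first-query costs and jitter can dominate a naive average, which is part of what a standardized benchmark is designed to control for.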

“All released results have been validated by the audits we conducted,” stated Guenther Schmuelling, MLPerf Inference Working Group Co-chair from Microsoft. “We were very impressed with the quality of the results. This is an amazing number of submissions in such a short time since we released these benchmarks this summer. It shows that inference is a growing and important application area, and we expect many more submissions in the months ahead.” 

“Companies are embracing these benchmark tests to provide their customers with an objective way to measure and compare the performance of their machine learning solutions,” stated Carole-Jean Wu, Inference Working Group Co-chair from Facebook. “There are many cost-performance tradeoffs involved in inference applications. These results will be invaluable for companies evaluating different solutions.”

Of the over 500 benchmark results released today, 182 are in the Closed Division, which is intended for direct comparison of systems. These results span 44 different systems, show a five-order-of-magnitude difference in performance and a three-order-of-magnitude range in estimated power consumption, and cover everything from embedded devices and smartphones to large-scale data center systems. The remaining 429 results are in the Open Division and show a more diverse range of models, including low-precision implementations and alternative models.

Organizations in China, Israel, Italy, Korea, the United Kingdom, and the United States submitted benchmark results. The submitters include: Alibaba, Centaur Technology, Dell EMC, dividiti, FuriosaAI, Google, Habana Labs, Hailo, Inspur, Intel, NVIDIA, Polytechnic University of Milan, Qualcomm, and Tencent.

Future versions of MLPerf will include additional benchmarks, such as speech-to-text and recommendation, and additional metrics, such as power consumption. For use with future versions, MLPerf is also developing a smartphone app that runs the inference benchmarks. “We are actively soliciting help from all our members and the broader community to make MLPerf better,” stated Vijay Janapa Reddi, Associate Professor, Harvard University, and MLPerf Inference Working Group Co-chair.

“Having independent benchmarks helps customers understand and evaluate hardware products in a comparable light. MLPerf is helping drive transparency and oversight into machine learning performance that will enable vendors to mature and build out the AI ecosystem. Intel is excited to be part of the MLPerf effort to realize the vision of AI Everywhere,” stated Dr. Naveen Rao, Corporate Vice President and General Manager of AI Products, Intel.

Additional information about these benchmarks is available at https://mlperf.org/inference-overview/. The MLPerf Inference Benchmark whitepaper is available at https://edge.seas.harvard.edu/files/edge/files/mlperf_inference.pdf. The MLPerf Training Benchmark whitepaper is available at https://arxiv.org/abs/1910.01500.

About MLPerf

MLPerf’s mission is to build fair and useful benchmarks for measuring training and inference performance of ML hardware, software, and services. MLPerf was founded in February 2018 as a collaboration of companies and researchers from educational institutions. MLPerf is presently led by volunteer working group chairs. MLPerf could not exist without the open source code and publicly available datasets that others have generously contributed to the community.
