MLPerf Inference Benchmark
Component: cr-lib:d0e50ebb5b9d4ec9 (v1.8.0)
Added by: open-research-aggregator (2020-01-05 14:18:46)
Authors: MLPerf
License: github.com/mlperf/policies/blob/master/TERMS%20OF%20USE.md
Source: mlperf.org
Creation date: 2019-11-07 11:43:59
CID: 6bc775410a855d0b:d0e50ebb5b9d4ec9

Authors: Vijay Janapa Reddi, Christine Cheng, David Kanter, Peter Mattson, Guenther Schmuelling, Carole-Jean Wu, Brian Anderson, Maximilien Breughe, Mark Charlebois, William Chou, Ramesh Chukka, Cody Coleman, Sam Davis, Pan Deng, Greg Diamos, Jared Duke, Dave Fick, J. Scott Gardner, Itay Hubara, Sachin Idgunji, Thomas B. Jablin, Jeff Jiao, Tom St. John, Pankaj Kanwar, David Lee, Jeffery Liao, Anton Lokhmotov, Francisco Massa, Peng Meng, Paulius Micikevicius, Colin Osborne, Gennady Pekhimenko, Arun Tejusve Raghunath Rajan, Dilip Sequeira, Ashish Sirasao, Fei Sun, Hanlin Tang, Michael Thomson, Frank Wei, Ephrem Wu, Lingjie Xu, Koichi Yamada, Bing Yu, George Yuan, Aaron Zhong, Peizhao Zhang, Yuchen Zhou
Artifact in the open CK+CodeReef format: Link to the development version
Where published: MLPerf inference v0.5
ArXiv: 1911.02549
Document: PDF
Artifact before CodeReefication:
CodeReefied portable workflows:
Reproduced results: CK format
Standard reproducibility and reusability badges:
  • Portable workflow framework used
Methodology to reproduce results: ACM
Results dashboard: Link
CodeReef dashboards with related results:

Machine-learning (ML) hardware and software system demand is burgeoning. Driven by ML applications, the number of different ML inference systems has exploded. Over 100 organizations are building ML inference chips, and the systems that incorporate existing models span at least three orders of magnitude in power consumption and four orders of magnitude in performance; they range from embedded devices to data-center solutions. Fueling the hardware are a dozen or more software frameworks and libraries. The myriad combinations of ML hardware and ML software make assessing ML-system performance in an architecture-neutral, representative, and reproducible manner challenging. There is a clear need for industry-wide standard ML benchmarking and evaluation criteria. MLPerf Inference answers that call. Driven by more than 30 organizations as well as more than 200 ML engineers and practitioners, MLPerf implements a set of rules and practices to ensure comparability across systems with wildly differing architectures. In this paper, we present the method and design principles of the initial MLPerf Inference release. The first call for submissions garnered more than 600 inference-performance measurements from 14 organizations, representing over 30 systems that show a range of capabilities.
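In practice, the comparability rules described above are enforced through the MLPerf LoadGen library, which generates queries for each inference scenario (SingleStream, MultiStream, Server, Offline) and measures latency and throughput against the benchmark's constraints. Below is a minimal sketch of driving LoadGen from Python. The mlperf_loadgen module is part of the MLPerf Inference repository, but callback signatures have varied across LoadGen releases (older versions took an extra latency-processing callback in ConstructSUT), and the fake_inference function, sample counts, and scenario choice here are placeholders for illustration, not part of any official submission.

```python
# Minimal sketch: running a LoadGen performance test against a toy SUT.
# Assumes the mlperf_loadgen Python bindings from the MLPerf Inference
# repository are installed; fake_inference stands in for a real backend.

import array
import mlperf_loadgen as lg

TOTAL_SAMPLES = 1024   # hypothetical dataset size
PERF_SAMPLES = 256     # samples LoadGen may keep resident in memory


def fake_inference(sample_index):
    """Placeholder for the system under test (SUT)."""
    return array.array("B", [sample_index % 256])


def issue_queries(query_samples):
    """Called by LoadGen with a batch of QuerySample objects."""
    responses = []
    for qs in query_samples:
        result = fake_inference(qs.index)
        addr, _ = result.buffer_info()
        responses.append(lg.QuerySampleResponse(qs.id, addr, len(result)))
    lg.QuerySamplesComplete(responses)


def flush_queries():
    pass


def load_samples(sample_indices):
    pass  # a real SUT would load these dataset samples into memory


def unload_samples(sample_indices):
    pass


settings = lg.TestSettings()
settings.scenario = lg.TestScenario.SingleStream  # or MultiStream/Server/Offline
settings.mode = lg.TestMode.PerformanceOnly       # AccuracyOnly checks model quality

sut = lg.ConstructSUT(issue_queries, flush_queries)
qsl = lg.ConstructQSL(TOTAL_SAMPLES, PERF_SAMPLES, load_samples, unload_samples)

lg.StartTest(sut, qsl, settings)  # writes mlperf_log_* result files

lg.DestroyQSL(qsl)
lg.DestroySUT(sut)
```

The separation between the query sample library (QSL) and the SUT callbacks is what lets the same load generator exercise systems ranging from embedded devices to data-center servers: LoadGen controls the query traffic and timing rules, while the submitter only implements how samples are loaded and how queries are answered.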


