Event Details

Automating and Simplifying MLPerf Inference Benchmark Submissions with TVM and CK

Date: 12/15/2021 2:00 pm
Track: User Tutorial

Organization: OctoML
Speakers: Grigori Fursin, Thomas Zhu, Alexander Peskov

MLPerf is a community effort to develop a common Machine Learning (ML) benchmark that provides consistent, reproducible, and fair measurements of accuracy, speed, and efficiency across diverse ML models, datasets, hardware, and software: https://mlcommons.org. While MLPerf's popularity is steadily growing, the barrier to entry remains high due to a complex benchmarking and submission pipeline and rapidly evolving hardware and software stacks. In this tutorial, we will demonstrate how Apache TVM and the Collective Knowledge (CK) framework can simplify and automate MLPerf inference benchmarking, based on our own MLPerf v1.1 submission. Our goal is to explain how our flexible open-source stack can lower the barrier to entry for future hardware participants by providing a powerful framework that is both hardware and ML framework agnostic and can optimize almost any deep learning model for almost any deployment hardware.
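To give a flavor of the model-compilation step that the tutorial's automated workflow wraps, the sketch below compiles an ONNX model with TVM's Python API and runs one inference. It is a minimal illustration, not the actual submission pipeline: the model file name, input tensor name, and target string are assumptions that would vary per model and per deployment hardware.

    import numpy as np
    import onnx
    import tvm
    from tvm import relay
    from tvm.contrib import graph_executor

    # Load a pretrained model (the file name is an assumption for illustration).
    onnx_model = onnx.load("resnet50.onnx")

    # Convert the model to TVM's Relay IR; the input name and shape
    # depend on how the specific model was exported.
    mod, params = relay.frontend.from_onnx(
        onnx_model, shape={"input": (1, 3, 224, 224)}
    )

    # Compile for a generic CPU target; the target string is what changes
    # when retargeting the same model to other hardware.
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm", params=params)

    # Run a single inference through the TVM graph executor.
    dev = tvm.cpu()
    module = graph_executor.GraphModule(lib["default"](dev))
    module.set_input("input", np.zeros((1, 3, 224, 224), dtype="float32"))
    module.run()
    out = module.get_output(0).numpy()

In an MLPerf submission, steps like these are repeated across many model/target combinations; the CK framework's role, as discussed in the tutorial, is to automate that repetition reproducibly.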

Register for TVMCon 2021