Event Details

Support sparse and irregular workloads in TIR

Date: 12/17/2021 3:10 pm
Track: Lightning Talks

Organization: University of Washington
Speakers: Zihao Ye, Ruihang Lai

Sparse and ragged tensor algebra is becoming increasingly important in deep learning models for graphs, proteins, and other irregular data. However, existing deep learning compilers either fail to support these workloads or cannot fully exploit existing hardware. In this talk we present Sparse TIR, an extension to TVM's Tensor IR that supports auto-tuning and is compatible with the existing TIR infrastructure. We show that Sparse TIR can accelerate common sparse workloads and help researchers design more hardware-efficient algorithms.
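As a rough illustration of the kind of sparse workload the talk targets (not Sparse TIR's own API), the sketch below computes sparse-dense matrix multiplication (SpMM) over a CSR matrix with NumPy/SciPy; the matrix sizes and density are hypothetical. The irregular, data-dependent inner loop is what makes such kernels hard for dense tensor compilers.

```python
import numpy as np
import scipy.sparse as sp

# Hypothetical graph-like workload: sparse adjacency matrix A times a
# dense feature matrix X, a core kernel in graph neural networks.
n, m, feat = 1024, 1024, 64
A = sp.random(n, m, density=0.01, format="csr", dtype=np.float32)
X = np.random.rand(m, feat).astype(np.float32)

# Reference result via SciPy's built-in sparse-dense multiply.
Y_ref = A @ X

# Hand-written CSR SpMM: the inner loop length (indptr[i+1] - indptr[i])
# varies per row, which is the irregularity a sparse compiler must handle.
Y = np.zeros((n, feat), dtype=np.float32)
for i in range(n):
    for k in range(A.indptr[i], A.indptr[i + 1]):
        Y[i] += A.data[k] * X[A.indices[k]]

assert np.allclose(Y, Y_ref, atol=1e-4)
```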
