Sparse and ragged tensor algebra is becoming increasingly important in deep learning models for graphs, proteins, and other irregular data. However, existing deep learning compilers either fail to support these workloads or cannot fully exploit existing hardware. In this talk we present SparseTIR, an extension to the current TVM TensorIR (TIR) that supports auto-tuning and is compatible with the existing TIR infrastructure. We show that SparseTIR can accelerate common sparse workloads and help researchers design more hardware-efficient algorithms.
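As a concrete illustration of the kind of workload meant here, the sketch below computes SpMM (sparse-dense matrix multiplication), the core aggregation step in many graph neural networks. It uses SciPy rather than the SparseTIR API (which is not shown in this abstract), so treat it as an example of the computation a sparse compiler must generate efficient code for, not of the SparseTIR interface itself.

```python
# Minimal SpMM sketch in SciPy, illustrating a typical sparse workload
# (e.g. GNN neighbor aggregation). This is NOT the SparseTIR API; it
# only shows the computation such a compiler would accelerate.
import numpy as np
import scipy.sparse as sp

n_nodes, n_feats = 1024, 64

# Random sparse adjacency matrix in CSR format (~1% nonzeros),
# standing in for a graph's edge structure.
adj = sp.random(n_nodes, n_nodes, density=0.01,
                format="csr", dtype=np.float32)

# Dense node-feature matrix.
feats = np.random.rand(n_nodes, n_feats).astype(np.float32)

# SpMM: aggregate neighbor features. Mathematically the same as
# adj.toarray() @ feats, but only the stored nonzeros are traversed.
out = adj @ feats
assert out.shape == (n_nodes, n_feats)
```

Because the nonzero pattern is irregular, a naive loop over it maps poorly onto dense hardware primitives; generating schedules that do map well is precisely the gap this work targets.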