Event Details

Integer-Only Quantization and Pruning of Transformers

Date: 12/17/2021 3:22 pm
Track: Lightning Talks

Organization: UC Berkeley
Speakers: Amir Gholami, Sehoon Kim

Transformer-based models, like BERT and RoBERTa, have achieved state-of-the-art results on many Natural Language Processing tasks. However, their memory footprint, inference latency, and power consumption are prohibitive for efficient inference at the edge, and even in the data center. While quantization can be a viable solution for this, previous work on quantizing Transformer-based models uses floating-point arithmetic during inference, which cannot efficiently utilize integer-only logic units such as the recent Turing Tensor Cores or traditional integer-only ARM processors. In this work, we propose two approaches to address this: I-BERT, a novel quantization scheme for Transformer-based models that performs the entire inference with integer-only arithmetic, as well as LTP, a learnable token pruning method. Our evaluation shows that we can achieve up to a 4x speedup compared to the baseline.
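To make the integer-only idea concrete, below is a minimal, hypothetical sketch (not the speakers' actual I-BERT implementation) of one quantized linear layer: weights and activations are symmetric int8, accumulation is int32, and the rescaling factor is approximated as a dyadic number (integer multiplier plus bit shift), so no floating-point operations are needed at inference time. The function and variable names are illustrative only.

```python
import numpy as np

def quantize_symmetric(x, num_bits=8):
    """Map a float tensor to int8 with a single symmetric per-tensor scale."""
    qmax = 2 ** (num_bits - 1) - 1                      # 127 for int8
    scale = float(np.max(np.abs(x))) / qmax             # float scale, computed offline
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dyadic(scale, shift=16):
    """Approximate a float scale as m / 2**shift so rescaling is integer-only."""
    return int(round(scale * (1 << shift))), shift

def int_linear(q_x, s_x, q_w, s_w, s_out):
    """Integer-only linear layer: int8 inputs, int32 accumulation, int8 output."""
    acc = q_x.astype(np.int32) @ q_w.astype(np.int32).T    # integer-only matmul
    m, shift = dyadic(s_x * s_w / s_out)                    # fold all scales into one multiplier
    q_out = (acc.astype(np.int64) * m) >> shift             # requantize via multiply + shift
    return np.clip(q_out, -127, 127).astype(np.int8)

# Offline: quantize weights/activations and calibrate the output scale.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 64)).astype(np.float32)
w = rng.standard_normal((32, 64)).astype(np.float32)
q_x, s_x = quantize_symmetric(x)
q_w, s_w = quantize_symmetric(w)
_, s_out = quantize_symmetric(x @ w.T)                      # output scale from calibration data

# Online: the forward pass runs entirely on integers; floats are used only to check the error.
q_y = int_linear(q_x, s_x, q_w, s_w, s_out)
err = np.max(np.abs(q_y.astype(np.float32) * s_out - x @ w.T))
print(f"max abs quantization error: {err:.3f}")
```

Because every inference-time operation above is an integer multiply, add, or shift, it can map directly onto integer-only hardware such as Tensor Core INT8 paths or ARM integer units; the nonlinear operations in a full Transformer (GELU, Softmax, LayerNorm) would additionally need integer polynomial approximations, which is the part the talk covers.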
