People like using PyTorch because of its very flexible eager mode, but that same flexibility has often made it difficult to apply compilers to PyTorch programs, particularly for training. We present a new extension point that lets developers take a compiler that works for inference and, with minimal effort, apply it to PyTorch models in training. For example, we can take the existing TorchScript-to-TVM integration and reuse it to optimize subgraphs during training. Our approach also makes many training optimizations from the literature easy to implement, and we provide several examples of how this extension point can be used to speed up PyTorch models.
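As a rough illustration of the kind of extension point described above, the sketch below (assuming PyTorch 2.x, whose `torch.compile` custom-backend API exposes a similar hook; it is not necessarily the mechanism the authors describe) shows how a backend callback receives captured subgraphs as `torch.fx.GraphModule`s. An inference-oriented compiler such as TVM could be invoked inside this callback; here the hypothetical `inspect_backend` simply falls back to running the graph eagerly:

```python
import torch

def inspect_backend(gm, example_inputs):
    # The backend receives a captured subgraph as a torch.fx.GraphModule.
    # A real integration would lower `gm` to a compiler (e.g. TVM) here;
    # this placeholder just runs the captured graph as-is.
    return gm.forward

model = torch.nn.Linear(4, 2)
compiled = torch.compile(model, backend=inspect_backend)

x = torch.randn(3, 4, requires_grad=True)
out = compiled(x)          # forward subgraph goes through the backend
out.sum().backward()       # backward also works, so training is supported
```

Because the framework handles capturing both forward and backward subgraphs, the backend author only needs to compile straight-line graphs, which is why an inference compiler can be reused for training with minimal effort.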