With TVM, machine learning models can run everywhere. For ML engineers who operate models in production, this poses an interesting challenge: how do we monitor model, sensor, and data health on the edge? This talk will present whylogs, an open source library that can be used to instrument ML inference at the edge, and discuss how monitoring can be implemented for ML models deployed to a fleet of devices.
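To make the idea concrete, here is a minimal sketch of profiling a batch of inference data with whylogs, assuming its v1 Python API; the column names and values are hypothetical. Rather than shipping raw data off the device, whylogs records lightweight statistical summaries that can be collected from each device in a fleet.

```python
import pandas as pd
import whylogs as why

# Hypothetical batch of inference inputs and model outputs
batch = pd.DataFrame({
    "sensor_temp": [21.3, 22.1, 20.8],
    "prediction": [0.91, 0.12, 0.77],
})

# Profile the batch; whylogs captures statistical summaries
# (counts, distributions, missing values) instead of raw rows
results = why.log(batch)

# Inspect the resulting profile as a pandas summary
print(results.view().to_pandas())
```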