TinyML processors differ from their cloud counterparts in that they operate on smaller neural network models with small batch sizes. In this scenario, tiny AI accelerators tightly integrated with the processor can deliver better resource utilization and energy efficiency than coarse-grained accelerators that communicate with the processor over a system bus. Tight integration also allows the tinyML processor to execute both AI and non-AI workloads on the same processor. Hence, we have developed AI-RISC, a family of custom RISC-V processors with tightly integrated AI accelerators and ISA extensions that directly target these accelerators. In this talk, we will present the challenges, solutions, and lessons learned while adopting TVM as the front-end compiler for AI-RISC. AI-RISC follows a two-step compilation strategy: TVM serves as the front-end compiler, while a custom C compiler generated with Synopsys ASIP Designer serves as the back-end and provides complete SDK generation.