TVM Conference 2018 Program
TVM is an open-source deep learning compiler stack for CPUs, GPUs, and specialized accelerators. It aims to close the gap between productivity-focused deep learning frameworks and performance- or efficiency-oriented hardware backends.
We are excited to hold a conference on the state of the art of deep learning compilation and optimization. We welcome TVM contributors, potential users, UW SAMPL sponsors and collaborators, and researchers and practitioners from the broader community. The conference will discuss recent advances in frameworks, compilers, systems and architecture support, security, training, and hardware acceleration.
Please also check out the latest TVM conference.
Program
Presentation slides and video recordings are now available. Check out the program below to learn more about exciting industrial use cases and new research directions.
Time | Session
---|---
9:00 | Keynote – SAMPL, Apple, Amazon, Huawei. [Video] [Slides] |
10:15 | TVM Stack Overview – Tianqi Chen, UW. [Video] [Slides] |
10:45 | Deep Learning Compilation at Amazon – Yida Wang, Amazon. [Video] [Slides] |
11:05 | Break |
11:25 | AutoTVM & Device Fleet – Eddie Yan, UW. [Video] [Slides] |
11:45 | VTA Open & Flexible Deep Learning Accelerator – Thierry Moreau, UW. [Video] [Slides] |
12:05 | Fast & Faster Privacy-Preserving ML in Secure Hardware Enclaves – Nick Hynes, UC Berkeley/Oasis Labs. [Video] [Slides] |
12:20 | Lunch (boxed lunches will be provided) |
13:30 | Spatial: A Language and Compiler for Application Accelerators – Kunle Olukotun/Raghu Prabhakar, Stanford & SambaNova. [Video] [Slides] |
13:50 | Machine Programming – Justin Gottschlich, Intel. [Video] [Slides] |
14:10 | PlaidML Stripe: Polyhedral IR + Model-guided Optimization – Brian Retford, Intel. [Video] [Slides] |
14:25 | Relay: A High-Level Differentiable IR – Jared Roesch, UW. [Video] [Slides] |
14:45 | Scalable Distributed Training with Parameter Hub: A Whirlwind Tour – Liang Luo, UW. [Video] [Slides] |
15:05 | The HammerBlade: An ML-Optimized Supercomputer for ML and Graphs – Michael Taylor, UW. [Video] [Slides] |
15:20 | Break, contributors meetup |
15:50 | TVM @ FB – Andrew Tulloch, Facebook. [Video] [Slides] |
16:10 | Inference Architectures @ Xilinx – Graham Schelle, Xilinx. [Video] [Slides] |
16:30 | Lightning talks session |
| | Efficient Voice Activity Detection via Binarized Neural Networks – Matthai Philipose, Microsoft. [Video] [Slides] |
| | Heterogeneous Bitwidth Binarization: Weird Operators with Big Benefits – Josh Fromm, UW. [Video] [Slides] |
| | Generating Fast Operators for Binarizable Networks – Meghan Cowan, UW. [Video] [Slides] |
| | OpenCL Backend for FPGA – Morita Kazutaka, NTT, Japan. [Video] [Slides] |
| | Build Your Own VTA Design with Chisel – Luis Vega, UW. [Video] [Slides] |
| | µTVM: Deep Learning on Bare-Metal Devices – Pratyush Patel, UW. [Video] [Slides] |
| | Supporting TVM on RISC-V Architectures – Jenq-Kuen Lee, NTHU, Taiwan. [Video] [Slides] |
| | Bring Your Own Datatypes – Gus Smith, UW. [Video] [Slides] |
| | AutoScheduler for TVM – Lianmin Zheng, SJTU. [Video] [Slides] |
| | Hybrid Script: A Text Format for Halide IR & A Python-TVM Hybrid Frontend – Jian Weng, UCLA. [Video] [Slides] |
| | Automatic Quantization for TVM – Ziheng Jiang, UW. [Video] [Slides] |
| | Data Visualization with Vega-Lite and Altair – Dominik Moritz, UW. [Video] [Slides] |
| | TVM on Hexagon DSP – Krzysztof Parzyszek, Qualcomm. [Video] [Slides] |
| | Sharing, Protection, and Compatibility for Reconfigurable Fabric with AmorphOS – Ahmed Khawaja, UT Austin. [Video] [Slides] |
17:35 | TVM & the Apache Software Foundation – Markus Weimer, Microsoft and Apache Software Foundation. [Slides] |
18:15 to 20:00 | Social (drinks, food) |
Hotels
Here is a list of hotels that are close to the UW campus.
- Watertown Hotel
  - (206) 826-4242
  - 4242 Roosevelt Way NE, Seattle, WA 98105
- Residence Inn by Marriott - University District
  - (206) 322-8887
  - 4501 12th Avenue NE, Seattle, WA 98105
- University Inn
  - (206) 632-5055
  - 4140 Roosevelt Way NE, Seattle, WA 98105