This hands-on tutorial will work through the pipeline of developing, training, and deploying deep learning applications using MXNet. Multiple applications, including recommendation systems and word embeddings, will be covered. Participants will learn how to write a deep learning program in a few lines of code in their favorite language, such as Python, Scala, or R, and train it on one or multiple GPUs. They will also learn how to deploy a deep learning application in the cloud or on mobile phones.


Mu Li is currently a final-year Ph.D. student at Carnegie Mellon University. His research interests lie in algorithms and systems for distributed machine learning and deep learning. In particular, he designs algorithms and systems that scale to petabyte datasets and run over thousands of machines. He has co-authored dozens of papers in top journals and conferences, spanning learning theory, machine learning, data mining, and systems. He has also served as a principal architect at Baidu and co-founded several machine learning startups.

Tianqi is a third-year Ph.D. student at the University of Washington, working on large-scale machine learning. He has co-authored many important works on scalable learning systems, statistical sampling theory, and deep learning. He also designed several widely used scalable learning systems, including XGBoost and MXNet.