Deep neural network representations play an important role in computer vision, speech, computational linguistics, robotics, reinforcement learning, and many other data-rich domains. In this talk I will show that learning-to-learn and compositionality are key ingredients for transferring knowledge across a wide range of tasks, for learning in small-data regimes, and for continual learning. I will demonstrate this with three examples: learning learning algorithms, neural programmer-interpreters, and learning communication.

Nando de Freitas is a machine learning professor at Oxford University and a senior staff research scientist at Google DeepMind. He is a fellow of the Canadian Institute for Advanced Research (CIFAR) in the successful Neural Computation and Adaptive Perception program, and an action editor for the Journal of Machine Learning Research.
He received his PhD from Trinity College, Cambridge University in 2000 for work on Bayesian methods for neural networks. From 1999 to 2001, he was a postdoctoral fellow at UC Berkeley in the artificial intelligence group of Stuart Russell. He was a professor at the University of British Columbia from 2001 to 2014. Nando has spun off several technology companies and received numerous awards, including several best paper awards, the Charles A. McDowell Award for Excellence in Research, and the Mathematics of Information Technology and Complex Systems Young Researcher Award.