Keynote Speaker

Yee Whye Teh

Professor, Department of Statistics, University of Oxford
Research Scientist, DeepMind

On Big Data Learning for Small Data Problems

Much recent progress in machine learning has been fuelled by the explosive growth in the amount and diversity of available data, and by the computational resources needed to crunch through it. This raises the question of whether machine learning systems necessarily need large amounts of data to solve a task well. An exciting recent development, under the banners of meta-learning, lifelong learning, learning to learn, multitask learning and so on, has been the observation that there is often heterogeneity within the data sets at hand: a large data set can frequently be viewed more productively as many smaller data sets, each pertaining to a different task. In recommender systems, for example, each user can be treated as a separate task with a small associated data set, and one holy grail of AI is developing systems that can learn to solve new tasks quickly from small amounts of data.

In such settings, the problem becomes how to “learn to learn quickly” by exploiting similarities among tasks. One perspective on how this is achievable is that exposure to many previous tasks allows the system to acquire rich prior knowledge about the world from which tasks are sampled, and it is this rich world knowledge that lets the system solve new tasks quickly. This is a very active, vibrant and diverse area of research, with many different approaches proposed recently. In this talk I will describe a view of this problem from probabilistic and deep learning perspectives, and outline a number of recent efforts in this direction that I have been involved in.
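To make the “learn to learn quickly” idea concrete, the sketch below shows one simple meta-learning recipe, a first-order method in the style of Reptile, applied to a toy family of small regression tasks. This is an illustrative assumption on our part, not a method from the talk: the task distribution, hyperparameters and function names are all invented for the example.

import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Each "task" is a small 1-D linear regression data set whose slope and
    # intercept are drawn from a shared task distribution (the "world"
    # that tasks are sampled from).
    slope = rng.normal(3.0, 0.5)
    intercept = rng.normal(-1.0, 0.5)
    x = rng.uniform(-1.0, 1.0, size=10)            # only 10 points per task
    y = slope * x + intercept + 0.1 * rng.normal(size=10)
    return x, y

def adapt(theta, x, y, lr=0.1, steps=5):
    # Inner loop: a few gradient steps on one task's small data set.
    w, b = theta
    for _ in range(steps):
        err = (w * x + b) - y
        w -= lr * 2.0 * np.mean(err * x)
        b -= lr * 2.0 * np.mean(err)
    return np.array([w, b])

theta = np.zeros(2)                                # meta-learned initialisation
for _ in range(2000):                              # outer loop over many tasks
    x, y = sample_task()
    theta += 0.05 * (adapt(theta, x, y) - theta)   # Reptile-style meta-update

# The initialisation ends up encoding the task distribution (roughly slope 3,
# intercept -1), so a brand-new task is solved well after a few inner steps.
x_new, y_new = sample_task()
print("meta-init:", theta, "adapted:", adapt(theta, x_new, y_new))

The point of the sketch is the division of labour: the outer loop sees many tasks and distils them into prior knowledge (here, just a good initialisation), while the inner loop needs only a handful of points from the new task.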


I am a Professor of Statistical Machine Learning at the Department of Statistics, University of Oxford, and a Research Scientist at DeepMind. I obtained my Ph.D. at the University of Toronto (working with Geoffrey Hinton), and did postdoctoral work at the University of California, Berkeley (with Michael Jordan) and the National University of Singapore (as a Lee Kuan Yew Postdoctoral Fellow). Prior to my current appointment, I was a Lecturer and then a Reader at the Gatsby Computational Neuroscience Unit, UCL, and a tutorial fellow at University College Oxford. I was programme co-chair of ICML 2017 and AISTATS 2010, and gave the Breiman Lecture at NIPS 2017. I am interested in the statistical and computational foundations of intelligence, and work on scalable machine learning, probabilistic models, Bayesian nonparametrics and deep learning.

Save the Date

KDD 2018, London, United Kingdom, 19–23 August 2018

The annual KDD conference is the premier interdisciplinary conference bringing together researchers and practitioners from data science, data mining, knowledge discovery, large-scale data analytics, and big data.
