In recent years, there has been a surge of interest in meta-learning algorithms: algorithms that optimize the performance of learning algorithms, algorithms that design learning functions such as neural networks based on data, and algorithms that discover relationships between tasks to enable fast learning of novel tasks. This represents the next major transition in artificial intelligence: from learning decision functions and representations to learning how to learn them.
This tutorial will cover several important topics in meta-learning, including few-shot learning, multi-task learning, and neural architecture search, along with their basic building blocks: reinforcement learning, evolutionary algorithms, optimization, and gradient-based learning. We will also touch upon their applications across a range of problems in computer vision.
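To give a flavor of gradient-based few-shot meta-learning, here is a minimal toy sketch in the spirit of MAML's inner/outer loop (first-order variant). Everything here is illustrative and not taken from the tutorial materials: each "task" is a 1-D linear regression with a different slope, the model is a single scalar weight, and the hyperparameters are arbitrary.

```python
import numpy as np

# Hypothetical toy setup: each task is 1-D regression y = a * x with a
# task-specific slope a, and the model is a single scalar weight w.
def task_loss(w, a, x):
    return np.mean((w * x - a * x) ** 2)

def task_grad(w, a, x):
    return np.mean(2.0 * (w * x - a * x) * x)

def maml_step(w, slopes, x, inner_lr=0.1, outer_lr=0.5):
    """One meta-update: adapt to each task with a single inner gradient
    step, then move the shared initialization toward parameters that
    perform well after adaptation (first-order approximation: the
    second-derivative terms are ignored)."""
    meta_grad = 0.0
    for a in slopes:
        w_adapted = w - inner_lr * task_grad(w, a, x)   # inner loop
        meta_grad += task_grad(w_adapted, a, x)          # post-adaptation loss gradient
    return w - outer_lr * meta_grad / len(slopes)

x = np.linspace(-1.0, 1.0, 20)
slopes = [0.5, 1.0, 1.5]   # task distribution: slopes centered at 1.0
w = 0.0
for _ in range(200):
    w = maml_step(w, slopes, x)
```

After meta-training, `w` sits near the center of the task distribution (here, slope 1.0), which is the initialization from which one gradient step adapts best to any sampled task.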
09:00 - 09:10 Introduction - Nikhil Naik
09:10 - 09:55 Few-shot meta-learning - Chelsea Finn (slides)
09:55 - 10:40 Multi-task learning and meta-learning - Nitish Keskar (slides)
10:40 - 11:00 Coffee Break
11:00 - 11:45 Neural architecture search - Nikhil Naik (slides)
11:45 - 12:30 Bayesian optimization and meta-learning - Frank Hutter (slides)
Contact: Nikhil Naik