no code implementations • 30 Mar 2021 • Giulia Denevi, Massimiliano Pontil, Carlo Ciliberto
Standard meta-learning for representation learning aims to find a common representation to be shared across multiple tasks.
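A minimal sketch of what shared-representation meta-learning can look like, assuming a linear representation and per-task ridge regression; the alternating scheme and all names (`B`, `k`, `step`) are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

# Hedged sketch: alternate between per-task ridge regression in a
# shared k-dimensional feature space (rows of B) and a gradient step
# on B over the pooled squared loss. Illustrative, not the paper's method.
def fit_shared_representation(tasks, k, lam=0.1, step=0.01, iters=200):
    d = tasks[0][0].shape[1]
    rng = np.random.default_rng(0)
    B = rng.standard_normal((k, d)) / np.sqrt(d)    # shared representation
    for _ in range(iters):
        grad = np.zeros_like(B)
        for X, y in tasks:                          # tasks: list of (X, y)
            Z = X @ B.T                             # task-specific features
            w = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ y)
            r = Z @ w - y                           # residuals
            grad += np.outer(w, X.T @ r) / len(y)   # d(loss)/dB with w fixed
        B -= step * grad / len(tasks)
    return B
```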
1 code implementation • NeurIPS 2020 • Giulia Denevi, Massimiliano Pontil, Carlo Ciliberto
However, these methods may perform poorly on heterogeneous environments of tasks, where the complexity of the tasks' distribution cannot be captured by a single meta-parameter vector.
1 code implementation • 11 Jul 2020 • Giulia Denevi, Dimitris Stamos, Massimiliano Pontil
We propose a method to learn a common bias vector for a growing sequence of low-variance tasks.
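A hedged sketch of the general idea, assuming a biased ridge inner solver and a simple running-average meta-update over the incoming tasks; the update rule and all names are illustrative rather than the paper's exact method:

```python
import numpy as np

def biased_ridge(X, y, h, lam):
    """Closed form of argmin_w ||X w - y||^2 / n + lam * ||w - h||^2."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X / n + lam * np.eye(d),
                           X.T @ y / n + lam * h)

def meta_sequence(task_stream, lam, d):
    """Process a growing sequence of tasks, biasing each new task's
    solution toward the running average h of earlier solutions."""
    h = np.zeros(d)
    for t, (X, y) in enumerate(task_stream, start=1):
        w = biased_ridge(X, y, h, lam)
        h += (w - h) / t          # running average of task solutions
        yield w, h.copy()
```

When the tasks have low variance around a common solution, each new task is regularized toward an increasingly accurate estimate of that solution, which is the intuition the snippet describes.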
1 code implementation • NeurIPS 2019 • Giulia Denevi, Dimitris Stamos, Carlo Ciliberto, Massimiliano Pontil
We study the problem of learning a series of tasks in a fully online Meta-Learning setting.
1 code implementation • 25 Mar 2019 • Giulia Denevi, Carlo Ciliberto, Riccardo Grazzi, Massimiliano Pontil
We study the problem of learning-to-learn: inferring a learning algorithm that works well on tasks sampled from an unknown distribution.
no code implementations • NeurIPS 2018 • Giulia Denevi, Carlo Ciliberto, Dimitris Stamos, Massimiliano Pontil
We show that, in this setting, the LTL problem can be reformulated as a Least Squares (LS) problem and we exploit a novel meta-algorithm to efficiently solve it.
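A sketch of why such a reformulation is possible, assuming a biased ridge inner solver: its solution is affine in the bias h, so the pooled validation squared loss becomes an ordinary least-squares problem in h. The names (`lam`, `tasks`) and the train/validation split are illustrative assumptions:

```python
import numpy as np

def ls_meta_solve(tasks, lam):
    """tasks: list of (X_tr, y_tr, X_val, y_val). Returns the bias h
    minimizing the pooled validation squared loss, solved as LS."""
    d = tasks[0][0].shape[1]
    A_blocks, b_blocks = [], []
    for X_tr, y_tr, X_val, y_val in tasks:
        n = X_tr.shape[0]
        C = X_tr.T @ X_tr / n + lam * np.eye(d)
        M = lam * np.linalg.solve(C, np.eye(d))   # w_t(h) = a + M h
        a = np.linalg.solve(C, X_tr.T @ y_tr / n)
        A_blocks.append(X_val @ M)                # part of loss linear in h
        b_blocks.append(y_val - X_val @ a)        # constant part
    A, b = np.vstack(A_blocks), np.concatenate(b_blocks)
    h, *_ = np.linalg.lstsq(A, b, rcond=None)     # the LS meta-problem
    return h
```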
no code implementations • 21 Mar 2018 • Giulia Denevi, Carlo Ciliberto, Dimitris Stamos, Massimiliano Pontil
In learning-to-learn the goal is to infer a learning algorithm that works well on a class of tasks sampled from an unknown meta distribution.