Search Results for author: Giulia Denevi

Found 8 papers, 4 papers with code

Conditional Meta-Learning of Linear Representations

no code implementations 30 Mar 2021 Giulia Denevi, Massimiliano Pontil, Carlo Ciliberto

Standard meta-learning for representation learning aims to find a common representation to be shared across multiple tasks.

Meta-Learning · Representation Learning

The Advantage of Conditional Meta-Learning for Biased Regularization and Fine-Tuning

1 code implementation NeurIPS 2020 Giulia Denevi, Massimiliano Pontil, Carlo Ciliberto

However, these methods may perform poorly on heterogeneous environments of tasks, where the complexity of the tasks' distribution cannot be captured by a single meta-parameter vector.

Meta-Learning

The Advantage of Conditional Meta-Learning for Biased Regularization and Fine-Tuning

no code implementations 25 Aug 2020 Giulia Denevi, Massimiliano Pontil, Carlo Ciliberto

However, these methods may perform poorly on heterogeneous environments of tasks, where the complexity of the tasks' distribution cannot be captured by a single meta-parameter vector.

Meta-Learning

Online Parameter-Free Learning of Multiple Low Variance Tasks

1 code implementation 11 Jul 2020 Giulia Denevi, Dimitris Stamos, Massimiliano Pontil

We propose a method to learn a common bias vector for a growing sequence of low-variance tasks.

Meta-Learning · Multi-Task Learning
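A simplified sketch of the idea (the function names, toy data, and the running-average update below are illustrative stand-ins; the paper's actual update is parameter-free and differs): each incoming task is solved by ridge regression regularized toward the current bias vector, which is then averaged into the bias.

```python
import numpy as np

def ridge_with_bias(X, y, h, lam):
    """Closed-form minimizer of ||X w - y||^2 + lam * ||w - h||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * h)

def online_common_bias(tasks, lam=1.0):
    """Process tasks one at a time, maintaining the bias as a running
    average of the per-task solutions (a simplified stand-in for the
    paper's parameter-free update)."""
    d = tasks[0][0].shape[1]
    h = np.zeros(d)
    for t, (X, y) in enumerate(tasks, start=1):
        w_t = ridge_with_bias(X, y, h, lam)
        h += (w_t - h) / t  # running average of task solutions
    return h

# Toy low-variance environment: task weights clustered around a shared center.
rng = np.random.default_rng(0)
center = np.full(4, 2.0)
tasks = []
for _ in range(20):
    X = rng.normal(size=(25, 4))
    w = center + 0.1 * rng.normal(size=4)
    tasks.append((X, X @ w))
h = online_common_bias(tasks, lam=1.0)
```

On such a low-variance sequence, the learned bias drifts toward the common center of the task weights, which is what makes regularizing new tasks toward it useful.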

Online-Within-Online Meta-Learning

1 code implementation NeurIPS 2019 Giulia Denevi, Dimitris Stamos, Carlo Ciliberto, Massimiliano Pontil

We study the problem of learning a series of tasks in a fully online Meta-Learning setting.

Meta-Learning

Learning-to-Learn Stochastic Gradient Descent with Biased Regularization

1 code implementation 25 Mar 2019 Giulia Denevi, Carlo Ciliberto, Riccardo Grazzi, Massimiliano Pontil

We study the problem of learning-to-learn: inferring a learning algorithm that works well on tasks sampled from an unknown distribution.
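A minimal sketch of the within-task step under stated assumptions (toy data, hypothetical function names, fixed step size; the paper's actual algorithm and guarantees differ): SGD on a least-squares loss with an ℓ2 penalty pulling the weights toward a meta-learned bias vector h.

```python
import numpy as np

def sgd_biased_reg(X, y, h, lam=1.0, lr=0.01, epochs=50, seed=0):
    """SGD on 0.5*(x.w - y)^2 + 0.5*lam*||w - h||^2, initialized at the bias h."""
    rng = np.random.default_rng(seed)
    w = h.copy()
    n = len(y)
    for _ in range(epochs):
        for i in rng.permutation(n):
            grad = (X[i] @ w - y[i]) * X[i] + lam * (w - h)
            w -= lr * grad
    return w

# Toy check: when the task's true weights lie near the bias h, regularizing
# toward h recovers them better than regularizing toward the origin.
rng = np.random.default_rng(1)
h_true = np.ones(5)
X = rng.normal(size=(30, 5))
w_task = h_true + 0.1 * rng.normal(size=5)
y = X @ w_task
w_good = sgd_biased_reg(X, y, h=h_true)
w_zero = sgd_biased_reg(X, y, h=np.zeros(5))
```

The comparison illustrates the point of a well-chosen bias: the same regularization strength that distorts the solution when centered at the origin barely perturbs it when centered near the task's true weights.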

Learning To Learn Around A Common Mean

no code implementations NeurIPS 2018 Giulia Denevi, Carlo Ciliberto, Dimitris Stamos, Massimiliano Pontil

We show that, in this setting, the LTL problem can be reformulated as a Least Squares (LS) problem and we exploit a novel meta-algorithm to efficiently solve it.

Meta-Learning

Incremental Learning-to-Learn with Statistical Guarantees

no code implementations 21 Mar 2018 Giulia Denevi, Carlo Ciliberto, Dimitris Stamos, Massimiliano Pontil

In learning-to-learn the goal is to infer a learning algorithm that works well on a class of tasks sampled from an unknown meta distribution.

Incremental Learning · Regression
