1 code implementation • 11 Jul 2020 • Giulia Denevi, Dimitris Stamos, Massimiliano Pontil
We propose a method to learn a common bias vector for a growing sequence of low-variance tasks.
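The biased-regularization idea behind this line of work can be sketched as follows: each task is solved by ridge regression pulled toward a common bias vector, which is updated online across tasks. This is a minimal illustration under assumed names, with a plain running-average meta-update standing in for the paper's parameter-free online procedure:

```python
import numpy as np

def solve_task(X, y, b, lam):
    """Ridge regression biased toward a common vector b:
    argmin_w ||X w - y||^2 + lam * ||w - b||^2 (closed form)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * b)

def learn_bias(tasks, lam=1e-3):
    """Process a growing sequence of (X, y) tasks, keeping b as the running
    mean of the per-task solutions -- a crude stand-in for the paper's
    online meta-update, not its exact algorithm."""
    d = tasks[0][0].shape[1]
    b = np.zeros(d)
    for t, (X, y) in enumerate(tasks, start=1):
        w = solve_task(X, y, b, lam)
        b += (w - b) / t  # online running average of task solutions
    return b
```

When the tasks share a common underlying weight vector, the learned bias concentrates around it, so later tasks start from a better-regularized point.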
1 code implementation • NeurIPS 2019 • Giulia Denevi, Dimitris Stamos, Carlo Ciliberto, Massimiliano Pontil
We study the problem of learning a series of tasks in a fully online meta-learning setting.
no code implementations • 2 Mar 2019 • Giulia Luise, Dimitris Stamos, Massimiliano Pontil, Carlo Ciliberto
We study the interplay between surrogate methods for structured prediction and techniques from multitask learning designed to leverage relationships between surrogate outputs.
no code implementations • NeurIPS 2018 • Giulia Denevi, Carlo Ciliberto, Dimitris Stamos, Massimiliano Pontil
We show that, in this setting, the LTL problem can be reformulated as a Least Squares (LS) problem, and we exploit a novel meta-algorithm to efficiently solve it.
no code implementations • 21 Mar 2018 • Giulia Denevi, Carlo Ciliberto, Dimitris Stamos, Massimiliano Pontil
In learning-to-learn, the goal is to infer a learning algorithm that works well on a class of tasks sampled from an unknown meta-distribution.
no code implementations • 27 Jun 2017 • Carlo Ciliberto, Dimitris Stamos, Massimiliano Pontil
A standard optimization strategy is based on formulating the problem as one of low-rank matrix factorization, which, however, leads to a non-convex problem.
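The non-convexity point can be made concrete: the factorized objective $\|M - UV^\top\|_F^2$ is non-convex in $(U, V)$ jointly, yet fixing either factor leaves a convex least-squares subproblem, which alternating minimization exploits. A generic sketch (not the paper's algorithm):

```python
import numpy as np

def als_lowrank(M, rank, iters=50, seed=0):
    """Alternating least squares for min_{U,V} ||M - U V^T||_F^2.
    The joint problem is non-convex, but each update below solves a
    convex least-squares subproblem in closed form."""
    rng = np.random.default_rng(seed)
    n, m = M.shape
    V = rng.standard_normal((m, rank))
    U = np.zeros((n, rank))
    for _ in range(iters):
        U = M @ V @ np.linalg.pinv(V.T @ V)    # fix V, solve for U
        V = M.T @ U @ np.linalg.pinv(U.T @ U)  # fix U, solve for V
    return U, V
```

The convex alternative alluded to in this line of work replaces the explicit factorization with a trace-norm (or related) penalty on the full matrix, trading non-convexity for a larger variable.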
no code implementations • 4 Jan 2016 • Andrew M. McDonald, Massimiliano Pontil, Dimitris Stamos
The spectral $k$-support norm enjoys good estimation properties in low rank matrix learning problems, empirically outperforming the trace norm.
no code implementations • 27 Dec 2015 • Andrew M. McDonald, Massimiliano Pontil, Dimitris Stamos
We note that the spectral box-norm is essentially equivalent to the cluster norm, a multitask learning regularizer introduced by [Jacob et al. 2009a], which in turn can be interpreted as a perturbation of the spectral $k$-support norm.
no code implementations • CVPR 2015 • Dimitris Stamos, Samuele Martelli, Moin Nabi, Andrew McDonald, Vittorio Murino, Massimiliano Pontil
However, previous work has highlighted the possible danger of simply training a model from the combined datasets, due to the presence of bias.
no code implementations • NeurIPS 2014 • Andrew M. McDonald, Massimiliano Pontil, Dimitris Stamos
The $k$-support norm has successfully been applied to sparse vector prediction problems.
no code implementations • 6 Mar 2014 • Andrew M. McDonald, Massimiliano Pontil, Dimitris Stamos
We further extend the $k$-support norm to matrices, and we observe that it is a special case of the matrix cluster norm.