Search Results for author: Dimitris Stamos

Found 11 papers, 2 papers with code

Online Parameter-Free Learning of Multiple Low Variance Tasks

1 code implementation · 11 Jul 2020 · Giulia Denevi, Dimitris Stamos, Massimiliano Pontil

We propose a method to learn a common bias vector for a growing sequence of low-variance tasks.

Meta-Learning · Multi-Task Learning
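The idea of regularizing each task toward a learned common bias vector can be illustrated with a minimal simulation. This is a hypothetical sketch, not the authors' algorithm: the ridge solver, the synthetic task stream, and the running-average bias update are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_biased_ridge(X, y, b, lam):
    """Ridge regression regularized toward a bias vector b:
    argmin_w ||Xw - y||^2 + lam * ||w - b||^2  (closed form)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * b)

# Stream of low-variance tasks: each task weight lies near a common center.
d, lam = 5, 1.0
w_star = rng.normal(size=d)          # unknown common center of the tasks
b = np.zeros(d)                      # running estimate of the common bias
for t in range(1, 51):
    w_t = w_star + 0.1 * rng.normal(size=d)   # task close to the center
    X = rng.normal(size=(20, d))
    y = X @ w_t
    w_hat = solve_biased_ridge(X, y, b, lam)
    b += (w_hat - b) / t             # online average of the task solutions

print(np.linalg.norm(b - w_star))    # small: b tracks the common center
```

Because the tasks have low variance around a shared center, the averaged bias converges near that center, and later tasks benefit from being regularized toward it.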

Online-Within-Online Meta-Learning

1 code implementation · NeurIPS 2019 · Giulia Denevi, Dimitris Stamos, Carlo Ciliberto, Massimiliano Pontil

We study the problem of learning a series of tasks in a fully online Meta-Learning setting.

Meta-Learning

Leveraging Low-Rank Relations Between Surrogate Tasks in Structured Prediction

no code implementations · 2 Mar 2019 · Giulia Luise, Dimitris Stamos, Massimiliano Pontil, Carlo Ciliberto

We study the interplay between surrogate methods for structured prediction and techniques from multitask learning designed to leverage relationships between surrogate outputs.

Structured Prediction

Learning To Learn Around A Common Mean

no code implementations · NeurIPS 2018 · Giulia Denevi, Carlo Ciliberto, Dimitris Stamos, Massimiliano Pontil

We show that, in this setting, the LTL problem can be reformulated as a Least Squares (LS) problem and we exploit a novel meta-algorithm to efficiently solve it.

Meta-Learning

Incremental Learning-to-Learn with Statistical Guarantees

no code implementations · 21 Mar 2018 · Giulia Denevi, Carlo Ciliberto, Dimitris Stamos, Massimiliano Pontil

In learning-to-learn the goal is to infer a learning algorithm that works well on a class of tasks sampled from an unknown meta distribution.

Incremental Learning · regression

Reexamining Low Rank Matrix Factorization for Trace Norm Regularization

no code implementations · 27 Jun 2017 · Carlo Ciliberto, Dimitris Stamos, Massimiliano Pontil

A standard optimization strategy is based on formulating the problem as one of low rank matrix factorization which, however, leads to a non-convex problem.

Matrix Completion
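The non-convex formulation mentioned in the abstract rests on a standard variational identity: the trace norm of $X$ equals the minimum of $\frac{1}{2}(\|A\|_F^2 + \|B\|_F^2)$ over factorizations $X = AB$. A minimal numerical check of this identity (not code from the paper; the matrix and factorization are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))

# Trace norm = sum of singular values of X.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
trace_norm = s.sum()

# Factorized form: for X = A @ B, the minimum of
# 0.5 * (||A||_F^2 + ||B||_F^2) equals ||X||_*,
# attained at A = U sqrt(S), B = sqrt(S) Vt.
A = U * np.sqrt(s)            # scale the columns of U
B = np.sqrt(s)[:, None] * Vt  # scale the rows of Vt
assert np.allclose(A @ B, X)  # valid factorization of X
factored = 0.5 * (np.linalg.norm(A, "fro")**2 + np.linalg.norm(B, "fro")**2)
print(abs(factored - trace_norm))  # essentially zero
```

Optimizing over the factors $A, B$ directly is what makes the problem non-convex, even though the original trace-norm-regularized problem is convex in $X$.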

Fitting Spectral Decay with the $k$-Support Norm

no code implementations · 4 Jan 2016 · Andrew M. McDonald, Massimiliano Pontil, Dimitris Stamos

The spectral $k$-support norm enjoys good estimation properties in low rank matrix learning problems, empirically outperforming the trace norm.

Matrix Completion

New Perspectives on $k$-Support and Cluster Norms

no code implementations · 27 Dec 2015 · Andrew M. McDonald, Massimiliano Pontil, Dimitris Stamos

We note that the spectral box-norm is essentially equivalent to the cluster norm, a multitask learning regularizer introduced by [Jacob et al. 2009a], which in turn can be interpreted as a perturbation of the spectral $k$-support norm.

Matrix Completion

Learning With Dataset Bias in Latent Subcategory Models

no code implementations · CVPR 2015 · Dimitris Stamos, Samuele Martelli, Moin Nabi, Andrew McDonald, Vittorio Murino, Massimiliano Pontil

However, previous work has highlighted the possible danger of simply training a model from the combined datasets, due to the presence of bias.

New Perspectives on $k$-Support and Cluster Norms

no code implementations · 6 Mar 2014 · Andrew M. McDonald, Massimiliano Pontil, Dimitris Stamos

We further extend the $k$-support norm to matrices, and we observe that it is a special case of the matrix cluster norm.

Matrix Completion
