Search Results for author: Pierre Laforgue

Found 14 papers, 2 papers with code

Multitask Online Learning: Listen to the Neighborhood Buzz

no code implementations · 26 Oct 2023 · Juliette Achddou, Nicolò Cesa-Bianchi, Pierre Laforgue

We study multitask online learning in a setting where agents can only exchange information with their neighbors on an arbitrary communication network.

Sketch In, Sketch Out: Accelerating both Learning and Inference for Structured Prediction with Kernels

no code implementations · 20 Feb 2023 · Tamim El Ahmad, Luc Brogat-Motte, Pierre Laforgue, Florence d'Alché-Buc

Surrogate kernel-based methods offer a flexible solution to structured output prediction by leveraging the kernel trick in both input and output spaces.

Structured Prediction

Linear Bandits with Memory: from Rotting to Rising

no code implementations · 16 Feb 2023 · Giulia Clerici, Pierre Laforgue, Nicolò Cesa-Bianchi

By choosing the cycle length so as to trade-off approximation and estimation errors, we then prove a bound of order $\sqrt{d}\,(m+1)^{\frac{1}{2}+\max\{\gamma, 0\}}\, T^{3/4}$ (ignoring log factors) on the regret against the optimal sequence of actions, where $T$ is the horizon and $d$ is the dimension of the linear action space.

Decision Making · Model Selection

On Medians of (Randomized) Pairwise Means

no code implementations · 1 Nov 2022 · Pierre Laforgue, Stephan Clémençon, Patrice Bertail

Tournament procedures, recently introduced in Lugosi & Mendelson (2016), offer an appealing alternative, from a theoretical perspective at least, to the principle of Empirical Risk Minimization in machine learning.

Metric Learning

Fast Kernel Methods for Generic Lipschitz Losses via $p$-Sparsified Sketches

1 code implementation · 8 Jun 2022 · Tamim El Ahmad, Pierre Laforgue, Florence d'Alché-Buc

Kernel methods are learning algorithms that enjoy solid theoretical foundations while suffering from important computational limitations.

regression
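
The computational limitation referred to above is the usual one for kernel methods: the n × n Gram matrix. A minimal illustration of the general sketching idea, using a plain Nyström (landmark sub-sampling) sketch for kernel ridge regression rather than the paper's p-sparsified sketches; all names here (`nystrom_krr`, `gamma`, `lam`) are made up for the example:

```python
import numpy as np

def nystrom_krr(X, y, X_test, n_landmarks=50, gamma=10.0, lam=1e-3, seed=0):
    """Kernel ridge regression accelerated with a plain Nystrom
    (sub-sampling) sketch: solve in the span of a few landmark points
    instead of all n training points."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=n_landmarks, replace=False)
    Z = X[idx]                                   # landmark points

    def k(A, B):                                 # Gaussian kernel
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    Knm = k(X, Z)                                # n x m cross-kernel
    Kmm = k(Z, Z)                                # m x m landmark kernel
    # Solve the m-dimensional sketched system instead of the n x n one.
    alpha = np.linalg.solve(Knm.T @ Knm + lam * Kmm, Knm.T @ y)
    return k(X_test, Z) @ alpha

# Toy 1-D regression problem: y = sin(3x) + noise.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(500)
pred = nystrom_krr(X, y, X, n_landmarks=50)
print(np.mean((pred - np.sin(3 * X[:, 0])) ** 2))  # reconstruction error
```

The sketched system is m × m with m = 50 landmarks, versus the 500 × 500 system a full kernel ridge solve would require.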

AdaTask: Adaptive Multitask Online Learning

no code implementations · 31 May 2022 · Pierre Laforgue, Andrea Della Vecchia, Nicolò Cesa-Bianchi, Lorenzo Rosasco

We introduce and analyze AdaTask, a multitask online learning algorithm that adapts to the unknown structure of the tasks.

A Last Switch Dependent Analysis of Satiation and Seasonality in Bandits

1 code implementation · 22 Oct 2021 · Pierre Laforgue, Giulia Clerici, Nicolò Cesa-Bianchi, Ran Gilad-Bachrach

Motivated by the fact that humans like some level of unpredictability or novelty, and may therefore quickly get bored when interacting with a stationary policy, we introduce a novel non-stationary bandit problem where the expected reward of an arm is fully determined by the time elapsed since the arm last took part in a switch of actions.
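
A toy illustration of why switch timing matters in such a problem. The satiation curve below is hypothetical (invented for this sketch, not the paper's reward model), as are the helper names `expected_reward` and `run`:

```python
def expected_reward(rounds_since_switch):
    # Hypothetical satiation curve: the expected reward decays the longer
    # the same arm is played without a switch, and resets after a switch.
    return max(0.0, 1.0 - 0.25 * rounds_since_switch)

def run(policy, horizon=20):
    """Cumulative expected reward of a deterministic arm sequence in which
    reward depends only on the number of rounds since the last switch."""
    total, last_arm, streak = 0.0, None, 0
    for t in range(horizon):
        arm = policy(t)
        streak = streak + 1 if arm == last_arm else 0  # reset on switch
        total += expected_reward(streak)
        last_arm = arm
    return total

stay = run(lambda t: 0)           # always play arm 0: satiation kicks in
alternate = run(lambda t: t % 2)  # switching keeps both arms "fresh"
print(stay, alternate)            # prints 2.5 20.0
```

Under this (made-up) curve, the constant policy earns 2.5 over 20 rounds while the alternating policy earns 20.0, which is the qualitative effect the problem formalizes.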

Fighting Selection Bias in Statistical Learning: Application to Visual Recognition from Biased Image Databases

no code implementations · 6 Sep 2021 · Stephan Clémençon, Pierre Laforgue, Robin Vogel

In practice, and especially when training deep neural networks, visual recognition rules are often learned based on various sources of information.

Learning Theory · Selection bias

Multitask Online Mirror Descent

no code implementations · NeurIPS 2021 · Nicolò Cesa-Bianchi, Pierre Laforgue, Andrea Paudice, Massimiliano Pontil

We introduce and analyze MT-OMD, a multitask generalization of Online Mirror Descent (OMD) which operates by sharing updates between tasks.

When OT meets MoM: Robust estimation of Wasserstein Distance

no code implementations · 18 Jun 2020 · Guillaume Staerman, Pierre Laforgue, Pavlo Mozharovskyi, Florence d'Alché-Buc

Issued from Optimal Transport, the Wasserstein distance has gained importance in Machine Learning due to its appealing geometrical properties and the increasing availability of efficient approximations.

Generative Adversarial Network

Generalization Bounds in the Presence of Outliers: a Median-of-Means Study

no code implementations · 9 Jun 2020 · Pierre Laforgue, Guillaume Staerman, Stephan Clémençon

In contrast to the empirical mean, the Median-of-Means (MoM) is an estimator of the mean $\theta$ of a square integrable random variable.

Generalization Bounds · Metric Learning
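
The MoM estimator mentioned in the abstract can be sketched in a few lines of standard-library Python; the block count (25) and the contaminated dataset below are made up for illustration:

```python
import random
import statistics

def median_of_means(sample, n_blocks):
    """Median-of-Means: shuffle the sample, split it into n_blocks
    equal-sized blocks, average each block, and return the median
    of the block means. (Note: shuffles the input list in place.)"""
    random.shuffle(sample)                 # blocks must be formed at random
    k = len(sample) // n_blocks
    block_means = [
        statistics.fmean(sample[i * k:(i + 1) * k])
        for i in range(n_blocks)
    ]
    return statistics.median(block_means)

# Mostly Gaussian data with mean 1.0, plus a few massive outliers.
random.seed(0)
data = [random.gauss(1.0, 1.0) for _ in range(1000)] + [1e6] * 5
print(statistics.fmean(data))      # empirical mean: wrecked by the outliers
print(median_of_means(data, 25))   # MoM: close to the true mean 1.0
```

The outliers can corrupt at most a few of the 25 block means, so the median of the block means stays near the true mean, which is the robustness property the paper's bounds quantify.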

Duality in RKHSs with Infinite Dimensional Outputs: Application to Robust Losses

no code implementations · ICML 2020 · Pierre Laforgue, Alex Lambert, Luc Brogat-Motte, Florence d'Alché-Buc

Operator-Valued Kernels (OVKs) and associated vector-valued Reproducing Kernel Hilbert Spaces provide an elegant way to extend scalar kernel methods when the output space is a Hilbert space.

regression · Representation Learning · +1

Statistical Learning from Biased Training Samples

no code implementations · 28 Jun 2019 · Stephan Clémençon, Pierre Laforgue

With the deluge of digitized information in the Big Data era, massive datasets are becoming increasingly available for learning predictive models.

Selection bias

Autoencoding any Data through Kernel Autoencoders

no code implementations · 28 May 2018 · Pierre Laforgue, Stephan Clémençon, Florence d'Alché-Buc

This paper investigates a novel algorithmic approach to data representation based on kernel methods.
