Search Results for author: Clément Calauzènes

Found 14 papers, 2 papers with code

Robust Consensus in Ranking Data Analysis: Definitions, Properties and Computational Issues

1 code implementation22 Mar 2023 Morgane Goibert, Clément Calauzènes, Ekhine Irurozki, Stéphan Clémençon

As the issue of robustness in AI systems becomes vital, statistical learning techniques that are reliable even in the presence of partially contaminated data have to be developed.

Jointly Efficient and Optimal Algorithms for Logistic Bandits

2 code implementations6 Jan 2022 Louis Faury, Marc Abeille, Kwang-Sung Jun, Clément Calauzènes

Logistic Bandits have recently undergone careful scrutiny by virtue of their combined theoretical and practical relevance.

Computational Efficiency

Pure Exploration and Regret Minimization in Matching Bandits

no code implementations31 Jul 2021 Flore Sentenac, Jialin Yi, Clément Calauzènes, Vianney Perchet, Milan Vojnovic

Finding an optimal matching in a weighted graph is a standard combinatorial problem.
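For intuition on the underlying combinatorial problem, here is a minimal sketch of maximum-weight bipartite matching on a toy graph (the weights are made up; this brute-forces all assignments, which is only viable for tiny graphs — real solvers use the $O(n^3)$ Hungarian algorithm, and the paper's bandit setting learns the weights online):

```python
from itertools import permutations

# Toy weighted bipartite graph: weights[i][j] = weight of edge (left i, right j).
weights = [[3, 1, 2],
           [2, 4, 6],
           [5, 2, 1]]

def max_weight_matching(weights):
    """Brute-force the maximum-weight perfect matching by trying every
    assignment of left vertices to right vertices."""
    n = len(weights)
    best = max(permutations(range(n)),
               key=lambda p: sum(weights[i][p[i]] for i in range(n)))
    return list(enumerate(best)), sum(weights[i][best[i]] for i in range(n))

matching, total = max_weight_matching(weights)
print(matching, total)  # total = 12
```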

Regret Bounds for Generalized Linear Bandits under Parameter Drift

no code implementations9 Mar 2021 Louis Faury, Yoan Russac, Marc Abeille, Clément Calauzènes

Generalized Linear Bandits (GLBs) are powerful extensions to the Linear Bandit (LB) setting, broadening the benefits of reward parametrization beyond linearity.

Wasserstein Learning of Determinantal Point Processes

no code implementations NeurIPS Workshop LMCA 2020 Lucas Anquetil, Mike Gartrell, Alain Rakotomamonjy, Ugo Tanielian, Clément Calauzènes

Through an evaluation on a real-world dataset, we show that our Wasserstein learning approach provides significantly improved predictive performance on a generative task compared to DPPs trained using MLE.

Point Processes

Instance-Wise Minimax-Optimal Algorithms for Logistic Bandits

no code implementations23 Oct 2020 Marc Abeille, Louis Faury, Clément Calauzènes

Faury et al. (2020) showed that the learning-theoretic difficulties of Logistic Bandits can be embodied in a large (sometimes prohibitively so) problem-dependent constant $\kappa$, which characterizes the magnitude of the reward's non-linearity.

Real-Time Optimisation for Online Learning in Auctions

no code implementations ICML 2020 Lorenzo Croissant, Marc Abeille, Clément Calauzènes

In display advertising, a small group of sellers and bidders face each other in up to $10^{12}$ auctions a day.

Do Not Mask What You Do Not Need to Mask: a Parser-Free Virtual Try-On

no code implementations ECCV 2020 Thibaut Issenhuth, Jérémie Mary, Clément Calauzènes

This task requires fitting an in-shop cloth image onto the image of a person, which is highly challenging because it involves cloth warping, image compositing, and synthesis.

Image Generation, Virtual Try-on

Improved Optimistic Algorithms for Logistic Bandits

no code implementations ICML 2020 Louis Faury, Marc Abeille, Clément Calauzènes, Olivier Fercoq

For logistic bandits, the frequentist regret guarantees of existing algorithms are $\tilde{\mathcal{O}}(\kappa \sqrt{T})$, where $\kappa$ is a problem-dependent constant.
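As a rough numerical illustration of the constant $\kappa$ in these bounds (a minimal sketch, not the paper's algorithm: a common definition takes $\kappa$ as the inverse of the smallest slope of the sigmoid link over the arm set, so it grows quickly with the parameter norm; the arm set and parameter below are made up):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def kappa(arms, theta):
    # kappa = sup over arms of 1 / mu'(<x, theta>), where mu is the
    # sigmoid link and mu'(z) = mu(z) * (1 - mu(z)) is its slope.
    z = arms @ theta
    slopes = sigmoid(z) * (1.0 - sigmoid(z))
    return 1.0 / slopes.min()

rng = np.random.default_rng(0)
arms = rng.normal(size=(50, 2))
arms /= np.linalg.norm(arms, axis=1, keepdims=True)  # unit-norm arm set

# kappa blows up as the parameter norm S grows (roughly like exp(S)).
kappas = [kappa(arms, s * np.array([1.0, 1.0]) / np.sqrt(2.0))
          for s in (1.0, 3.0, 6.0)]
print(kappas)
```

Since the sigmoid's slope never exceeds $1/4$, $\kappa \ge 4$ always, and it increases monotonically here as the parameter norm grows — which is why a $\kappa\sqrt{T}$ regret bound can be very pessimistic.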

Bridging the gap between regret minimization and best arm identification, with application to A/B tests

no code implementations9 Oct 2018 Rémy Degenne, Thomas Nedelec, Clément Calauzènes, Vianney Perchet

State-of-the-art online learning procedures focus either on selecting the best alternative ("best arm identification") or on minimizing the cost (the "regret").

Neural Generative Models for Global Optimization with Gradients

no code implementations22 May 2018 Louis Faury, Flavian Vasile, Clément Calauzènes, Olivier Fercoq

The aim of global optimization is to find the global optimum of arbitrary classes of functions, possibly highly multimodal ones.

Bayesian Optimization, Gaussian Processes

Offline A/B testing for Recommender Systems

no code implementations22 Jan 2018 Alexandre Gilotte, Clément Calauzènes, Thomas Nedelec, Alexandre Abraham, Simon Dollé

Before A/B testing online a new version of a recommender system, it is usual to perform some offline evaluations on historical data.

Counterfactual, Product Recommendation +1
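A standard building block of such offline evaluations is the inverse-propensity (importance-sampling) estimator, of which the paper studies capped variants. A minimal sketch (the toy policies and the `cap` threshold are illustrative, not the paper's estimators):

```python
import numpy as np

def ips_estimate(rewards, logging_probs, target_probs, cap=None):
    """Inverse-propensity estimate of the target policy's expected reward
    from logs collected under the logging policy. Optionally caps the
    importance weights, trading bias for lower variance."""
    w = target_probs / logging_probs
    if cap is not None:
        w = np.minimum(w, cap)
    return np.mean(w * rewards)

rng = np.random.default_rng(0)
n = 10_000
actions = rng.integers(2, size=n)            # logged actions, uniform logging policy
rewards = (actions == 0).astype(float)       # action 0 always pays 1, action 1 pays 0
logging_probs = np.full(n, 0.5)
target_probs = (actions == 0).astype(float)  # deterministic target: always play action 0

est = ips_estimate(rewards, logging_probs, target_probs)
print(est)  # close to 1.0, the target policy's true expected reward
```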

Distributed SAGA: Maintaining linear convergence rate with limited communication

no code implementations29 May 2017 Clément Calauzènes, Nicolas Le Roux

In recent years, variance-reducing stochastic methods have shown great practical performance, exhibiting a linear convergence rate where other stochastic methods offer only a sub-linear one.
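To illustrate the variance-reduction idea, here is a minimal single-machine SAGA sketch on a toy least-squares problem (the problem, step size, and iteration count are made up; the paper's contribution is the distributed, communication-limited variant, which this does not implement):

```python
import numpy as np

def saga_least_squares(A, b, lr, n_iters, seed=0):
    """SAGA on f(w) = (1/2n) * sum_i (a_i . w - b_i)^2.
    Keeps a table of the last gradient seen for each component, and
    corrects each stochastic gradient with the table's running mean."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    grads = A * (A @ w - b)[:, None]   # per-component gradients a_i (a_i.w - b_i)
    g_avg = grads.mean(axis=0)
    for _ in range(n_iters):
        j = rng.integers(n)
        g_new = A[j] * (A[j] @ w - b[j])
        w -= lr * (g_new - grads[j] + g_avg)   # variance-reduced update
        g_avg += (g_new - grads[j]) / n        # maintain the running average
        grads[j] = g_new
    return w

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
b = A @ w_true
w = saga_least_squares(A, b, lr=0.01, n_iters=20_000)
err = np.linalg.norm(w - w_true)
print(err)
```

On this well-conditioned noiseless problem the iterates converge linearly to `w_true`, which is the behavior the sentence above describes.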
