no code implementations • 22 Feb 2023 • Aymeric Dieuleveut, Gersende Fort, Eric Moulines, Hoi-To Wai
Stochastic approximation (SA) is a classical algorithm that has had, since its early days, a major impact on signal processing and, more recently, on machine learning, owing to the need to handle large amounts of data observed with uncertainty.
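For readers unfamiliar with the scheme, a minimal Robbins-Monro iteration looks as follows; the root-finding problem, step sizes, and noise level are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_h(theta):
    # Noisy observation of the mean field h(theta) = -(theta - 2.0);
    # the target root theta* = 2.0 is an illustrative choice.
    return -(theta - 2.0) + rng.normal(scale=0.5)

theta = 0.0
for n in range(1, 10_001):
    gamma_n = 1.0 / n                 # classical decreasing step size
    theta += gamma_n * noisy_h(theta)

print(theta)  # approaches the root theta* = 2.0 as n grows
```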
1 code implementation • 10 Oct 2022 • Jean Ogier du Terrail, Samy-Safwan Ayed, Edwige Cyffers, Felix Grimberg, Chaoyang He, Regis Loeb, Paul Mangold, Tanguy Marchand, Othmane Marfoq, Erum Mushtaq, Boris Muzellec, Constantin Philippenko, Santiago Silva, Maria Teleńczuk, Shadi Albarqouni, Salman Avestimehr, Aurélien Bellet, Aymeric Dieuleveut, Martin Jaggi, Sai Praneeth Karimireddy, Marco Lorenzi, Giovanni Neglia, Marc Tommasi, Mathieu Andreux
In this work, we propose a novel cross-silo dataset suite focused on healthcare, FLamby (Federated Learning AMple Benchmark of Your cross-silo strategies), to bridge the gap between theory and practice of cross-silo FL.
2 code implementations • 15 Feb 2022 • Margaux Zaffran, Aymeric Dieuleveut, Olivier Féron, Yannig Goude, Julie Josse
While recent works have tackled this issue, we argue that Adaptive Conformal Inference (ACI; Gibbs and Candès, 2021), developed to handle distribution shift in time series, is a good procedure for time series with general dependency.
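For context, the ACI update of Gibbs and Candès (2021) adapts the effective miscoverage level online from observed coverage errors; the sketch below is a minimal rendition with an illustrative step size gamma.

```python
def aci_update(alpha_t, covered, alpha_target=0.1, gamma=0.01):
    """One step of Adaptive Conformal Inference (Gibbs and Candès, 2021).

    alpha_t : current effective miscoverage level
    covered : True if the latest observation fell inside the interval
    """
    err_t = 0.0 if covered else 1.0
    return alpha_t + gamma * (alpha_target - err_t)
```

The next prediction interval is then built at level alpha_t, so repeated miscoverage lowers alpha_t and widens subsequent intervals, while over-coverage does the opposite.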
no code implementations • 3 Feb 2022 • Alexis Ayme, Claire Boyer, Aymeric Dieuleveut, Erwan Scornet
Missing values arise in most real-world data sets due to the aggregation of multiple sources and intrinsically missing information (sensor failures, unanswered survey questions, etc.).
1 code implementation • 11 Jan 2022 • Baptiste Goujaud, Céline Moucer, François Glineur, Julien Hendrickx, Adrien Taylor, Aymeric Dieuleveut
PEPit is a Python package that simplifies access to worst-case analyses of a large family of first-order optimization methods, possibly involving gradient, projection, proximal, or linear optimization oracles, along with their approximate or Bregman variants.
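As an illustration of the kind of analysis the package targets, here is a minimal usage sketch in the spirit of the PEPit documentation, bounding the worst case of gradient descent on a smooth strongly convex function; exact argument names may differ between package versions.

```python
from PEPit import PEP
from PEPit.functions import SmoothStronglyConvexFunction

L, mu, gamma, n = 1.0, 0.1, 1.0, 5

problem = PEP()
func = problem.declare_function(SmoothStronglyConvexFunction, mu=mu, L=L)
xs = func.stationary_point()          # (implicit) minimizer of func
x0 = problem.set_initial_point()      # initial iterate
problem.set_initial_condition((x0 - xs) ** 2 <= 1)

x = x0
for _ in range(n):
    x = x - gamma * func.gradient(x)  # n steps of gradient descent

problem.set_performance_metric((x - xs) ** 2)
worst_case = problem.solve()          # worst-case value of ||x_n - x*||^2
```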
no code implementations • NeurIPS 2021 • Aymeric Dieuleveut, Gersende Fort, Eric Moulines, Geneviève Robin
The Expectation Maximization (EM) algorithm is the default algorithm for inference in latent variable models.
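To fix ideas (this is a textbook illustration, not the variant studied in the paper), one EM iteration for a two-component Gaussian mixture with unit variances alternates an E-step computing responsibilities and an M-step performing weighted updates.

```python
import numpy as np

def em_step(x, pi, mu1, mu2):
    """One EM iteration for the mixture pi*N(mu1, 1) + (1 - pi)*N(mu2, 1)."""
    # E-step: posterior responsibility of component 1 for each sample
    p1 = pi * np.exp(-0.5 * (x - mu1) ** 2)
    p2 = (1 - pi) * np.exp(-0.5 * (x - mu2) ** 2)
    r = p1 / (p1 + p2)
    # M-step: weighted maximum-likelihood updates of the parameters
    return r.mean(), (r * x).sum() / r.sum(), ((1 - r) * x).sum() / (1 - r).sum()
```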
1 code implementation • 17 Nov 2021 • Maxence Noble, Aurélien Bellet, Aymeric Dieuleveut
Federated Learning (FL) is a paradigm for large-scale distributed learning which faces two key challenges: (i) efficient training from highly heterogeneous user data, and (ii) protecting the privacy of participating users.
no code implementations • 3 Nov 2021 • Aymeric Dieuleveut, Gersende Fort, Eric Moulines, Geneviève Robin
The Expectation Maximization (EM) algorithm is the default algorithm for inference in latent variable models.
no code implementations • 1 Jun 2021 • Maxime Vono, Vincent Plassier, Alain Durmus, Aymeric Dieuleveut, Eric Moulines
The objective of Federated Learning (FL) is to perform statistical inference for data which are decentralised and stored locally on networked clients.
no code implementations • NeurIPS 2021 • Louis Leconte, Aymeric Dieuleveut, Edouard Oyallon, Eric Moulines, Gilles Pagès
The growing size of models and datasets has made distributed implementations of stochastic gradient descent (SGD) an active field of research.
2 code implementations • NeurIPS 2021 • Constantin Philippenko, Aymeric Dieuleveut
To obtain this improvement, we design MCM, an algorithm such that the downlink compression only impacts local models, while the global model is preserved.
no code implementations • NeurIPS 2020 • Aude Sportisse, Claire Boyer, Aymeric Dieuleveut, Julie Josse
The stochastic gradient algorithm is a key ingredient of many machine learning methods and is particularly well suited to large-scale learning.
no code implementations • ICML 2020 • Scott Pesme, Aymeric Dieuleveut, Nicolas Flammarion
Constant step-size Stochastic Gradient Descent exhibits two phases: a transient phase during which iterates make fast progress towards the optimum, followed by a stationary phase during which iterates oscillate around the optimal point.
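A toy numerical illustration of the two phases (not from the paper) on a one-dimensional least-squares objective; the step size and noise level are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_star, gamma, theta = 3.0, 0.1, -10.0

errors = []
for t in range(500):
    # Stochastic gradient of 0.5*(theta - theta_star)^2 with additive noise
    g = (theta - theta_star) + rng.normal(scale=1.0)
    theta -= gamma * g
    errors.append(abs(theta - theta_star))

# Transient phase: the error decreases quickly; stationary phase: the
# iterates oscillate around theta_star at a level governed by gamma.
print(np.mean(errors[:50]), np.mean(errors[-100:]))
```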
1 code implementation • 25 Jun 2020 • Constantin Philippenko, Aymeric Dieuleveut
We introduce a framework - Artemis - to tackle the problem of learning in a distributed or federated setting with communication constraints and device partial participation.
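As a generic illustration of compressed uplink communication (this is not the Artemis algorithm itself), each device can send a sparsified gradient, e.g. via a top-k compressor, and the server averages the compressed messages; k and the vector shapes are illustrative.

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude coordinates of a 1-D vector, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def aggregate(worker_grads, k=10):
    # Each worker uplinks a compressed gradient; the server averages them.
    return np.mean([top_k(g, k) for g in worker_grads], axis=0)
```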
1 code implementation • NeurIPS 2019 • Aymeric Dieuleveut, Kumar Kshitij Patel
Synchronous mini-batch SGD is state-of-the-art for large-scale distributed machine learning.
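For context, the local-SGD pattern that trades communication for local computation looks roughly as follows (a generic sketch under assumed names such as local_grad, not the paper's exact scheme).

```python
import numpy as np

def local_sgd_round(models, local_grad, gamma=0.1, local_steps=5):
    """One communication round of local SGD.

    models     : list of per-worker parameter vectors (same shape)
    local_grad : function (worker_id, params) -> stochastic gradient
    """
    for i, w in enumerate(models):
        for _ in range(local_steps):          # local updates, no communication
            w = w - gamma * local_grad(i, w)
        models[i] = w
    avg = np.mean(models, axis=0)             # single synchronization step
    return [avg.copy() for _ in models]
```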
no code implementations • 25 Apr 2019 • Kumar Kshitij Patel, Aymeric Dieuleveut
Synchronous mini-batch SGD is state-of-the-art for large-scale distributed machine learning.
2 code implementations • NeurIPS 2019 • Jean-Yves Franceschi, Aymeric Dieuleveut, Martin Jaggi
Time series constitute a challenging data type for machine learning algorithms, due to their highly variable lengths and sparse labeling in practice.
2 code implementations • 29 Aug 2018 • Sidak Pal Singh, Andreas Hug, Aymeric Dieuleveut, Martin Jaggi
We present a framework for building unsupervised representations of entities and their compositions, where each entity is viewed as a probability distribution rather than a vector embedding.
no code implementations • 5 Jun 2018 • Sidak Pal Singh, Andreas Hug, Aymeric Dieuleveut, Martin Jaggi
We propose a unified framework for building unsupervised representations of individual objects or entities (and their compositions), by associating with each object both a distributional as well as a point estimate (vector embedding).
no code implementations • 20 Jul 2017 • Aymeric Dieuleveut, Alain Durmus, Francis Bach
We consider the minimization of an objective function given access to unbiased estimates of its gradient through stochastic gradient descent (SGD) with constant step-size.
no code implementations • 17 Feb 2016 • Aymeric Dieuleveut, Nicolas Flammarion, Francis Bach
We consider the optimization of a quadratic objective function whose gradients are only accessible through a stochastic oracle that returns the gradient at any given point plus a zero-mean, finite-variance random error.
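A minimal sketch of averaged SGD with such an oracle on a quadratic (illustrative only; the paper's algorithm combines averaging with acceleration).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
H = np.diag(np.linspace(0.1, 1.0, d))        # illustrative quadratic Hessian
theta_star = np.ones(d)

gamma, T = 0.5, 10_000
theta, theta_bar = np.zeros(d), np.zeros(d)
for t in range(1, T + 1):
    noise = rng.normal(scale=0.1, size=d)    # zero-mean, finite-variance error
    grad = H @ (theta - theta_star) + noise  # stochastic gradient oracle
    theta -= gamma * grad
    theta_bar += (theta - theta_bar) / t     # Polyak-Ruppert averaging

print(np.linalg.norm(theta_bar - theta_star))
```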