Search Results for author: Aymeric Dieuleveut

Found 21 papers, 9 papers with code

Stochastic Approximation Beyond Gradient for Signal Processing and Machine Learning

no code implementations 22 Feb 2023 Aymeric Dieuleveut, Gersende Fort, Eric Moulines, Hoi-To Wai

Stochastic approximation (SA) is a classical algorithm that has had, since its early days, a huge impact on signal processing, and nowadays on machine learning, due to the need to deal with large amounts of data observed with uncertainties.

Adaptive Conformal Predictions for Time Series

2 code implementations 15 Feb 2022 Margaux Zaffran, Aymeric Dieuleveut, Olivier Féron, Yannig Goude, Julie Josse

While recent works tackled this issue, we argue that Adaptive Conformal Inference (ACI, Gibbs and Candès, 2021), developed for time series under distribution shift, is a good procedure for time series with general dependency.

Conformal Prediction · Decision Making · +2
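As a pointer for this entry, here is a minimal sketch of the ACI update of Gibbs and Candès (2021) that the paper above studies for dependent time series: the effective miscoverage level is nudged online depending on whether the last interval covered the observation. The interval builder itself is left abstract; `predict_interval` is a hypothetical placeholder, not part of the paper's code.

    # Sketch of the Adaptive Conformal Inference (ACI) update of Gibbs and
    # Candès (2021), the procedure studied above for dependent time series.
    # `predict_interval` is a hypothetical placeholder for any method that
    # returns a prediction interval at a requested miscoverage level.
    def aci(y_stream, predict_interval, alpha=0.1, gamma=0.01):
        alpha_t = alpha                                # effective miscoverage level
        intervals = []
        for t, y in enumerate(y_stream):
            lo, hi = predict_interval(t, alpha_t)
            err = 0.0 if lo <= y <= hi else 1.0        # 1 when the interval missed y
            alpha_t = alpha_t + gamma * (alpha - err)  # ACI online correction
            intervals.append((lo, hi))
        return intervals

Note that alpha_t can temporarily leave (0, 1); the interval builder is expected to handle this, for instance by returning the whole real line or an empty interval.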

Minimax rate of consistency for linear models with missing values

no code implementations3 Feb 2022 Alexis Ayme, Claire Boyer, Aymeric Dieuleveut, Erwan Scornet

Missing values arise in most real-world data sets due to the aggregation of multiple sources and intrinsically missing information (sensor failure, unanswered questions in surveys...).

PEPit: computer-assisted worst-case analyses of first-order optimization methods in Python

1 code implementation 11 Jan 2022 Baptiste Goujaud, Céline Moucer, François Glineur, Julien Hendrickx, Adrien Taylor, Aymeric Dieuleveut

PEPit is a Python package that aims to simplify access to worst-case analyses of a large family of first-order optimization methods, possibly involving gradient, projection, proximal, or linear optimization oracles, along with their approximate or Bregman variants.
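For illustration, a sketch of how PEPit is typically used to bound the worst-case behaviour of plain gradient descent on a smooth convex function, following the pattern of the package's documented examples; class and method names are quoted from memory and should be checked against the PEPit documentation for the installed version.

    # Sketch of a PEPit worst-case analysis of gradient descent on an L-smooth
    # convex function, following the pattern of the package's documented examples.
    from PEPit import PEP
    from PEPit.functions import SmoothConvexFunction

    L, gamma, n = 1.0, 1.0, 5                      # smoothness, step size, iterations

    problem = PEP()
    func = problem.declare_function(SmoothConvexFunction, L=L)
    xs = func.stationary_point()                   # a minimizer of func
    x0 = problem.set_initial_point()
    problem.set_initial_condition((x0 - xs) ** 2 <= 1)

    x = x0
    for _ in range(n):
        x = x - gamma * func.gradient(x)

    problem.set_performance_metric(func(x) - func(xs))
    worst_case = problem.solve()                   # worst-case value of f(x_n) - f(x*)
    print(worst_case)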

Differentially Private Federated Learning on Heterogeneous Data

1 code implementation 17 Nov 2021 Maxence Noble, Aurélien Bellet, Aymeric Dieuleveut

Federated Learning (FL) is a paradigm for large-scale distributed learning which faces two key challenges: (i) efficient training from highly heterogeneous user data, and (ii) protecting the privacy of participating users.

Federated Learning

QLSD: Quantised Langevin stochastic dynamics for Bayesian federated learning

no code implementations 1 Jun 2021 Maxime Vono, Vincent Plassier, Alain Durmus, Aymeric Dieuleveut, Eric Moulines

The objective of Federated Learning (FL) is to perform statistical inference for data which are decentralised and stored locally on networked clients.

Federated Learning
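To give a flavour of the title of this entry, here is a schematic sketch of a Langevin-type server update built from quantised client gradient contributions. It is an illustration of the general idea only, not the actual QLSD algorithm or its variants; `local_grad` and `quantise` are hypothetical placeholders.

    # Schematic sketch only (not the exact QLSD algorithm): a Langevin-type
    # server update built from quantised client gradient contributions.
    # `local_grad` and `quantise` are hypothetical placeholders.
    import numpy as np

    def quantised_langevin_step(theta, clients, local_grad, quantise,
                                gamma=1e-2, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        # each client sends a quantised stochastic gradient of its local potential
        g = sum(quantise(local_grad(c, theta)) for c in clients)
        # Langevin dynamics: gradient step plus injected Gaussian noise
        xi = rng.standard_normal(theta.shape)
        return theta - gamma * g + np.sqrt(2.0 * gamma) * xi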

Preserved central model for faster bidirectional compression in distributed settings

2 code implementations NeurIPS 2021 Constantin Philippenko, Aymeric Dieuleveut

To obtain this improvement, we design MCM, an algorithm such that the downlink compression only impacts local models, while the global model is preserved.

Model Compression
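A schematic sketch of the idea stated in the snippet above: with bidirectional compression, downlink compression perturbs only the clients' copies of the model, while the server keeps and updates an uncompressed global model. This is not the actual MCM algorithm, only an illustration; `compress` stands for any unbiased compression operator and `local_grad` is a hypothetical placeholder.

    # Schematic sketch (not the actual MCM algorithm): downlink compression only
    # affects the clients' local models; the server's global model is preserved.
    def bidirectional_round(w_global, clients, local_grad, compress, lr=0.1):
        # downlink: clients receive a compressed model, so compression noise
        # enters their local models only
        grads = [local_grad(c, compress(w_global)) for c in clients]
        # uplink: compressed gradients are aggregated by the server
        update = sum(compress(g) for g in grads) / len(clients)
        # the central model is updated without ever being compressed
        return w_global - lr * update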

Debiasing Averaged Stochastic Gradient Descent to handle missing values

no code implementations NeurIPS 2020 Aude Sportisse, Claire Boyer, Aymeric Dieuleveut, Julie Josse

The stochastic gradient algorithm is a key ingredient of many machine learning methods, and is particularly appropriate for large-scale learning.

On Convergence-Diagnostic based Step Sizes for Stochastic Gradient Descent

no code implementations ICML 2020 Scott Pesme, Aymeric Dieuleveut, Nicolas Flammarion

Constant step-size Stochastic Gradient Descent exhibits two phases: a transient phase during which iterates make fast progress towards the optimum, followed by a stationary phase during which iterates oscillate around the optimal point.
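One classical convergence diagnostic of the kind discussed above is Pflug's statistic: in the stationary phase, successive stochastic gradients tend to point in opposite directions, so a running average of their inner products turning negative signals oscillation around the optimum and can trigger a step-size decrease. A minimal sketch of that generic idea (not the paper's exact procedure) follows.

    # Sketch of constant step-size SGD with a Pflug-type convergence diagnostic.
    # Generic illustration only, not the paper's exact procedure.
    import numpy as np

    def sgd_with_diagnostic(grad, x0, gamma=0.5, n_steps=10_000, burn_in=100):
        x = np.asarray(x0, dtype=float)
        prev_g, stat, count = None, 0.0, 0
        for _ in range(n_steps):
            g = grad(x)                                     # stochastic gradient at x
            if prev_g is not None:
                count += 1
                stat += (np.dot(prev_g, g) - stat) / count  # mean of <g_{t-1}, g_t>
                if count > burn_in and stat < 0:            # stationary phase detected
                    gamma /= 2                              # halve the step size
                    prev_g, stat, count = None, 0.0, 0
                    continue
            x = x - gamma * g
            prev_g = g
        return x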

Bidirectional compression in heterogeneous settings for distributed or federated learning with partial participation: tight convergence guarantees

1 code implementation 25 Jun 2020 Constantin Philippenko, Aymeric Dieuleveut

We introduce a framework - Artemis - to tackle the problem of learning in a distributed or federated setting with communication constraints and partial device participation.

Federated Learning

Communication trade-offs for Local-SGD with large step size

1 code implementation NeurIPS 2019 Aymeric Dieuleveut, Kumar Kshitij Patel

Synchronous mini-batch SGD is state-of-the-art for large-scale distributed machine learning.
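For context, a minimal sketch of the Local-SGD scheme named in the title above: each worker runs several local SGD steps with the (large) step size and models are averaged only at communication rounds, trading communication for extra local computation. This is a generic sketch, not the paper's exact setup; `local_grad(k, x)` is a placeholder for a stochastic gradient computed on worker k's data.

    # Minimal sketch of Local-SGD: each of K workers takes H local SGD steps
    # between communication rounds, and models are averaged at those rounds.
    import numpy as np

    def local_sgd(local_grad, x0, n_workers=8, n_rounds=50, local_steps=10, gamma=0.1):
        x = np.asarray(x0, dtype=float)
        for _ in range(n_rounds):
            local_models = []
            for k in range(n_workers):
                xk = x.copy()
                for _ in range(local_steps):        # H local steps, no communication
                    xk -= gamma * local_grad(k, xk)
                local_models.append(xk)
            x = np.mean(local_models, axis=0)       # synchronize by averaging
        return x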

Communication trade-offs for synchronized distributed SGD with large step size

no code implementations 25 Apr 2019 Kumar Kshitij Patel, Aymeric Dieuleveut

Synchronous mini-batch SGD is state-of-the-art for large-scale distributed machine learning.

Unsupervised Scalable Representation Learning for Multivariate Time Series

2 code implementations NeurIPS 2019 Jean-Yves Franceschi, Aymeric Dieuleveut, Martin Jaggi

Time series constitute a challenging data type for machine learning algorithms, due to their highly variable lengths and sparse labeling in practice.

BIG-bench Machine Learning · Representation Learning · +1

Context Mover's Distance & Barycenters: Optimal Transport of Contexts for Building Representations

2 code implementations 29 Aug 2018 Sidak Pal Singh, Andreas Hug, Aymeric Dieuleveut, Martin Jaggi

We present a framework for building unsupervised representations of entities and their compositions, where each entity is viewed as a probability distribution rather than a vector embedding.

Sentence Embedding · Sentence Similarity
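To make the "entity as a probability distribution" view concrete, a small sketch comparing two entities represented as histograms over a shared set of context embeddings via their optimal transport (Wasserstein) cost, using the POT library. This illustrates the general idea only, not the paper's construction of context distributions or barycenters; the data below are toy placeholders.

    # Sketch: compare two entities viewed as probability distributions over a
    # shared set of context embeddings via their optimal transport cost.
    import numpy as np
    import ot  # POT: Python Optimal Transport

    rng = np.random.default_rng(0)
    contexts = rng.standard_normal((50, 100))   # 50 context vectors in R^100 (toy)
    p = rng.dirichlet(np.ones(50))              # entity A as a histogram over contexts
    q = rng.dirichlet(np.ones(50))              # entity B as a histogram over contexts

    M = ot.dist(contexts, contexts)             # squared-Euclidean ground cost
    distance = ot.emd2(p, q, M)                 # exact OT cost between the two entities
    print(distance)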

Wasserstein is all you need

no code implementations 5 Jun 2018 Sidak Pal Singh, Andreas Hug, Aymeric Dieuleveut, Martin Jaggi

We propose a unified framework for building unsupervised representations of individual objects or entities (and their compositions), by associating with each object both a distributional as well as a point estimate (vector embedding).

Bridging the Gap between Constant Step Size Stochastic Gradient Descent and Markov Chains

no code implementations 20 Jul 2017 Aymeric Dieuleveut, Alain Durmus, Francis Bach

We consider the minimization of an objective function given access to unbiased estimates of its gradient through stochastic gradient descent (SGD) with constant step-size.

Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression

no code implementations 17 Feb 2016 Aymeric Dieuleveut, Nicolas Flammarion, Francis Bach

We consider the optimization of a quadratic objective function whose gradients are only accessible through a stochastic oracle that returns the gradient at any given point plus a zero-mean finite variance random error.

regression
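A toy sketch of the setting described in this last entry: SGD with constant step size on a quadratic objective, where each query to the oracle returns the true gradient plus zero-mean finite-variance noise, and the final estimate is the Polyak-Ruppert average of the iterates. This is an illustration of the problem setup only, not the paper's algorithm or rates.

    # Toy sketch: constant step-size SGD on a quadratic with a noisy gradient
    # oracle, combined with Polyak-Ruppert averaging of the iterates.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 10
    H = np.diag(np.linspace(0.1, 1.0, d))       # curvature of the quadratic
    b = rng.standard_normal(d)
    x_star = np.linalg.solve(H, b)              # minimizer

    def noisy_grad(x, sigma=0.5):
        return H @ x - b + sigma * rng.standard_normal(d)   # unbiased oracle

    gamma, n = 0.5, 20_000
    x, x_bar = np.zeros(d), np.zeros(d)
    for t in range(1, n + 1):
        x = x - gamma * noisy_grad(x)
        x_bar += (x - x_bar) / t                # running average of the iterates
    print(np.linalg.norm(x_bar - x_star))       # averaged iterate approaches x*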
