Search Results for author: Aymeric Dieuleveut

Found 27 papers, 11 papers with code

Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression

no code implementations17 Feb 2016 Aymeric Dieuleveut, Nicolas Flammarion, Francis Bach

We consider the optimization of a quadratic objective function whose gradients are only accessible through a stochastic oracle that returns the gradient at any given point plus a zero-mean finite variance random error.

regression
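
For concreteness, the setting described above can be mimicked in a few lines: a least-squares objective whose gradient oracle returns the exact gradient plus zero-mean, finite-variance noise, optimized by averaged SGD. The data, step size, and noise level below are illustrative placeholders, not the paper's experiments.

```python
import numpy as np

# Averaged SGD on a quadratic objective, where the oracle returns the true
# gradient plus zero-mean, finite-variance noise. All constants are illustrative.
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def noisy_gradient(x, sigma=0.5):
    """True gradient of 0.5 * ||Ax - b||^2 / n, plus zero-mean Gaussian noise."""
    return A.T @ (A @ x - b) / n + sigma * rng.standard_normal(d)

x = np.zeros(d)
x_avg = np.zeros(d)
step = 0.1
for t in range(1, 5001):
    x -= step * noisy_gradient(x)
    x_avg += (x - x_avg) / t          # Polyak-Ruppert averaging of the iterates

print("distance of averaged iterate to the least-squares solution:",
      np.linalg.norm(x_avg - np.linalg.lstsq(A, b, rcond=None)[0]))
```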

Bridging the Gap between Constant Step Size Stochastic Gradient Descent and Markov Chains

no code implementations20 Jul 2017 Aymeric Dieuleveut, Alain Durmus, Francis Bach

We consider the minimization of an objective function given access to unbiased estimates of its gradient through stochastic gradient descent (SGD) with constant step-size.
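
A toy sketch of the Markov-chain viewpoint on constant step-size SGD: for a fixed step size the iterates do not converge to a point but reach a stationary regime around the optimum, with a spread that shrinks with the step size. The objective and constants are illustrative only and unrelated to the paper's analysis.

```python
import numpy as np

# Constant step-size SGD viewed as a Markov chain: after a burn-in, the iterates
# fluctuate around the optimum (here 0) with a step-size-dependent spread.
rng = np.random.default_rng(1)

def noisy_gradient(x, sigma=1.0):
    return x + sigma * rng.standard_normal()   # gradient of x**2 / 2, plus noise

def stationary_stats(step, burn_in=10_000, iters=50_000):
    x = 5.0
    tail = []
    for t in range(burn_in + iters):
        x -= step * noisy_gradient(x)
        if t >= burn_in:
            tail.append(x)
    tail = np.array(tail)
    return tail.mean(), tail.std()

for step in (0.5, 0.1, 0.02):
    mean, std = stationary_stats(step)
    print(f"step={step:<5} stationary mean ≈ {mean:+.3f}, spread ≈ {std:.3f}")
```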

Wasserstein is all you need

no code implementations5 Jun 2018 Sidak Pal Singh, Andreas Hug, Aymeric Dieuleveut, Martin Jaggi

We propose a unified framework for building unsupervised representations of individual objects or entities (and their compositions), by associating with each object both a distributional as well as a point estimate (vector embedding).

Sentence

Context Mover's Distance & Barycenters: Optimal Transport of Contexts for Building Representations

2 code implementations29 Aug 2018 Sidak Pal Singh, Andreas Hug, Aymeric Dieuleveut, Martin Jaggi

We present a framework for building unsupervised representations of entities and their compositions, where each entity is viewed as a probability distribution rather than a vector embedding.

Sentence, Sentence Embedding +1

Unsupervised Scalable Representation Learning for Multivariate Time Series

2 code implementations NeurIPS 2019 Jean-Yves Franceschi, Aymeric Dieuleveut, Martin Jaggi

Time series constitute a challenging data type for machine learning algorithms, due to their highly variable lengths and sparse labeling in practice.

BIG-bench Machine Learning, Representation Learning +2

Communication trade-offs for synchronized distributed SGD with large step size

no code implementations25 Apr 2019 Kumar Kshitij Patel, Aymeric Dieuleveut

Synchronous mini-batch SGD is state-of-the-art for large-scale distributed machine learning.

Communication trade-offs for Local-SGD with large step size

1 code implementation NeurIPS 2019 Aymeric Dieuleveut, Kumar Kshitij Patel

Synchronous mini-batch SGD is state-of-the-art for large-scale distributed machine learning.
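
Both entries above concern the communication cost of synchronous distributed SGD. The sketch below shows the generic local-SGD pattern (H local steps between synchronizations) on a toy quadratic problem; worker objectives, step size, and synchronization period are illustrative assumptions, not the papers' setup.

```python
import numpy as np

# Minimal local-SGD sketch: each of W workers runs H local SGD steps between
# synchronizations, so communication happens once every H steps instead of
# every step. Objective, constants, and data are illustrative only.
rng = np.random.default_rng(2)
W, d = 4, 10
targets = rng.standard_normal((W, d))          # each worker's local optimum

def local_gradient(w, x, sigma=0.5):
    """Noisy gradient of 0.5 * ||x - targets[w]||^2 on worker w."""
    return (x - targets[w]) + sigma * rng.standard_normal(d)

def local_sgd(step=0.1, H=10, rounds=200):
    x = np.zeros(d)                            # shared model after each sync
    for _ in range(rounds):
        local = np.tile(x, (W, 1))
        for _ in range(H):                     # H local steps, no communication
            for w in range(W):
                local[w] -= step * local_gradient(w, local[w])
        x = local.mean(axis=0)                 # one synchronization (averaging)
    return x

x = local_sgd()
print("distance to the average of the workers' local optima:",
      np.linalg.norm(x - targets.mean(axis=0)))
```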

Bidirectional compression in heterogeneous settings for distributed or federated learning with partial participation: tight convergence guarantees

1 code implementation25 Jun 2020 Constantin Philippenko, Aymeric Dieuleveut

We introduce a framework - Artemis - to tackle the problem of learning in a distributed or federated setting with communication constraints and device partial participation.

Federated Learning
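
One standard ingredient in communication-constrained settings like the one above is an unbiased compression operator applied to exchanged vectors. The sketch below shows a generic rand-k compressor and checks its unbiasedness empirically; it is a generic illustration, not the Artemis algorithm.

```python
import numpy as np

# Unbiased rand-k compression: keep k random coordinates of a vector and rescale
# them by d/k so that the compressed vector equals the original in expectation.
rng = np.random.default_rng(3)

def rand_k(v, k):
    """Keep k random coordinates of v, rescaled by d/k, so E[rand_k(v)] = v."""
    out = np.zeros_like(v)
    idx = rng.choice(v.size, size=k, replace=False)
    out[idx] = v[idx] * v.size / k
    return out

g = rng.standard_normal(1000)
averaged = np.mean([rand_k(g, 100) for _ in range(500)], axis=0)
# The relative error of the average shrinks as more independent compressions
# are combined, reflecting the unbiasedness of the operator.
print("relative error after averaging 500 independent compressions:",
      np.linalg.norm(averaged - g) / np.linalg.norm(g))
```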

On Convergence-Diagnostic based Step Sizes for Stochastic Gradient Descent

no code implementations ICML 2020 Scott Pesme, Aymeric Dieuleveut, Nicolas Flammarion

Constant step-size Stochastic Gradient Descent exhibits two phases: a transient phase during which iterates make fast progress towards the optimum, followed by a stationary phase during which iterates oscillate around the optimal point.
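
A hedged sketch of the idea: monitor a statistic that changes sign between the two phases (here, the running sum of inner products between consecutive stochastic gradients, in the spirit of Pflug's diagnostic) and reduce the step size when it turns negative. This is a toy illustration, not the paper's exact procedure or guarantees.

```python
import numpy as np

# Convergence-diagnostic step-size schedule (toy version): run constant step-size
# SGD, accumulate the running sum of inner products between consecutive
# stochastic gradients (positive while making progress, negative once the
# iterates oscillate around the optimum), and halve the step when it turns
# negative after enough samples.
rng = np.random.default_rng(4)
d = 20
x_star = rng.standard_normal(d)

def noisy_gradient(x, sigma=1.0):
    return (x - x_star) + sigma * rng.standard_normal(d)

x = np.zeros(d)
step, prev_g, stat, count = 0.5, None, 0.0, 0
for t in range(10_000):
    g = noisy_gradient(x)
    x -= step * g
    if prev_g is not None:
        stat += g @ prev_g
        count += 1
    prev_g = g
    if count >= 2_000 and stat < 0:            # stationarity detected
        step, stat, count = step / 2, 0.0, 0
        print(f"t={t:>6}: diagnostic fired, new step = {step:.4f}")
print("final error:", np.linalg.norm(x - x_star))
```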

Debiasing Averaged Stochastic Gradient Descent to handle missing values

no code implementations NeurIPS 2020 Aude Sportisse, Claire Boyer, Aymeric Dieuleveut, Julie Josse

The stochastic gradient algorithm is a key ingredient of many machine learning methods and is particularly appropriate for large-scale learning.

Preserved central model for faster bidirectional compression in distributed settings

2 code implementations NeurIPS 2021 Constantin Philippenko, Aymeric Dieuleveut

To obtain this improvement, we design MCM, an algorithm such that the downlink compression only impacts local models, while the global model is preserved.

Model Compression

QLSD: Quantised Langevin stochastic dynamics for Bayesian federated learning

no code implementations1 Jun 2021 Maxime Vono, Vincent Plassier, Alain Durmus, Aymeric Dieuleveut, Eric Moulines

The objective of Federated Learning (FL) is to perform statistical inference for data which are decentralised and stored locally on networked clients.

Federated Learning

Differentially Private Federated Learning on Heterogeneous Data

1 code implementation17 Nov 2021 Maxence Noble, Aurélien Bellet, Aymeric Dieuleveut

Federated Learning (FL) is a paradigm for large-scale distributed learning which faces two key challenges: (i) efficient training from highly heterogeneous user data, and (ii) protecting the privacy of participating users.

Federated Learning

PEPit: computer-assisted worst-case analyses of first-order optimization methods in Python

1 code implementation11 Jan 2022 Baptiste Goujaud, Céline Moucer, François Glineur, Julien Hendrickx, Adrien Taylor, Aymeric Dieuleveut

PEPit is a Python package aiming at simplifying the access to worst-case analyses of a large family of first-order optimization methods possibly involving gradient, projection, proximal, or linear optimization oracles, along with their approximate, or Bregman variants.
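
A minimal usage sketch, roughly following the gradient-descent example from the PEPit documentation: it computes a worst-case bound on the last iterate of gradient descent over smooth strongly convex functions. Argument names should be checked against the current PEPit docs, and an SDP backend such as cvxpy is assumed to be installed.

```python
# Worst-case analysis with PEPit: bound ||x_n - x_*||^2 for n steps of gradient
# descent with step size gamma on an L-smooth, mu-strongly convex function,
# starting from ||x_0 - x_*||^2 <= 1.
from PEPit import PEP
from PEPit.functions import SmoothStronglyConvexFunction

problem = PEP()
mu, L, gamma, n = 0.1, 1.0, 1.0, 5
func = problem.declare_function(SmoothStronglyConvexFunction, mu=mu, L=L)
xs = func.stationary_point()                      # the (unknown) minimizer x_*
x0 = problem.set_initial_point()
problem.set_initial_condition((x0 - xs) ** 2 <= 1)

x = x0
for _ in range(n):
    x = x - gamma * func.gradient(x)              # n steps of gradient descent

problem.set_performance_metric((x - xs) ** 2)
worst_case = problem.solve()
print("worst-case ||x_n - x_*||^2:", worst_case)
```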

Minimax rate of consistency for linear models with missing values

no code implementations3 Feb 2022 Alexis Ayme, Claire Boyer, Aymeric Dieuleveut, Erwan Scornet

Missing values arise in most real-world data sets due to the aggregation of multiple sources and intrinsically missing information (sensor failure, unanswered questions in surveys...).

Adaptive Conformal Predictions for Time Series

2 code implementations15 Feb 2022 Margaux Zaffran, Aymeric Dieuleveut, Olivier Féron, Yannig Goude, Julie Josse

While recent works tackled this issue, we argue that Adaptive Conformal Inference (ACI, Gibbs and Candès, 2021), developed for distribution-shift time series, is a good procedure for time series with general dependency.

Conformal Prediction, Decision Making +4
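
The core of ACI is a one-line update of the effective miscoverage level, alpha_{t+1} = alpha_t + gamma (alpha - err_t), which widens intervals after misses and narrows them after covered points. The sketch below wraps this update around a deliberately crude forecaster; everything except the update rule is an illustrative placeholder.

```python
import numpy as np

# Adaptive Conformal Inference (ACI) update on a toy stream with a drifting
# noise scale. Only the alpha_t update is the actual ACI mechanism; the
# forecaster and the score/quantile choices are placeholders.
rng = np.random.default_rng(5)
alpha, gamma = 0.1, 0.01
alpha_t = alpha
coverage = []
residuals = list(np.abs(rng.standard_normal(200)))   # initial calibration scores

for t in range(1_000):
    # Interval half-width = empirical (1 - alpha_t) quantile of past scores.
    q = np.quantile(residuals, min(max(1 - alpha_t, 0.0), 1.0))
    y, y_hat = rng.standard_normal() * (1 + t / 1_000), 0.0   # drifting noise scale
    err = float(abs(y - y_hat) > q)                 # 1 if the interval missed y
    alpha_t += gamma * (alpha - err)                # ACI update (Gibbs & Candès, 2021)
    residuals.append(abs(y - y_hat))
    coverage.append(1 - err)

print(f"empirical coverage ≈ {np.mean(coverage):.3f} (target {1 - alpha})")
```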

Stochastic Approximation Beyond Gradient for Signal Processing and Machine Learning

no code implementations22 Feb 2023 Aymeric Dieuleveut, Gersende Fort, Eric Moulines, Hoi-To Wai

Stochastic Approximation (SA) is a classical algorithm that has had since the early days a huge impact on signal processing, and nowadays on machine learning, due to the necessity to deal with a large amount of data observed with uncertainties.

Conformal Prediction with Missing Values

1 code implementation5 Jun 2023 Margaux Zaffran, Aymeric Dieuleveut, Julie Josse, Yaniv Romano

This motivates our novel generalized conformalized quantile regression framework, missing data augmentation, which yields prediction intervals that are valid conditionally on the patterns of missing values, despite their exponential number.

Conformal Prediction, Data Augmentation +5

Compressed and distributed least-squares regression: convergence rates with applications to Federated Learning

no code implementations2 Aug 2023 Constantin Philippenko, Aymeric Dieuleveut

In this paper, we investigate the impact of compression on stochastic gradient algorithms for machine learning, a technique widely used in distributed and federated learning.

Federated Learning, regression

Proving Linear Mode Connectivity of Neural Networks via Optimal Transport

2 code implementations29 Oct 2023 Damien Ferbach, Baptiste Goujaud, Gauthier Gidel, Aymeric Dieuleveut

The energy landscape of high-dimensional non-convex optimization problems is crucial to understanding the effectiveness of modern deep neural network architectures.

Linear Mode Connectivity

Compression with Exact Error Distribution for Federated Learning

no code implementations31 Oct 2023 Mahmoud Hegazy, Rémi Leluc, Cheuk Ting Li, Aymeric Dieuleveut

Compression schemes have been extensively used in Federated Learning (FL) to reduce the communication cost of distributed learning.

Federated Learning

Sliced-Wasserstein Estimation with Spherical Harmonics as Control Variates

no code implementations2 Feb 2024 Rémi Leluc, Aymeric Dieuleveut, François Portier, Johan Segers, Aigerim Zhuman

Spherical harmonics are polynomials on the sphere that form an orthonormal basis of the set of square-integrable functions on the sphere.
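
For context, the baseline Monte Carlo estimator of the sliced-Wasserstein distance averages one-dimensional Wasserstein distances over random projection directions on the sphere; the paper's contribution is to reduce its variance with spherical-harmonics control variates, which is not reproduced in the sketch below. Data and sample sizes are illustrative.

```python
import numpy as np

# Plain Monte Carlo estimator of the squared sliced-Wasserstein distance:
# average 1D squared Wasserstein distances between projections of the two
# samples onto random directions drawn uniformly on the sphere.
rng = np.random.default_rng(6)
d, n, n_dirs = 3, 500, 200
X = rng.standard_normal((n, d))
Y = rng.standard_normal((n, d)) + np.array([1.0, 0.0, 0.0])

def w2_1d(a, b):
    """Squared 2-Wasserstein distance between two equal-size 1D samples."""
    return np.mean((np.sort(a) - np.sort(b)) ** 2)

dirs = rng.standard_normal((n_dirs, d))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # uniform directions on the sphere
sw2 = np.mean([w2_1d(X @ u, Y @ u) for u in dirs])
print("Monte Carlo estimate of SW_2^2:", sw2)
```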

Random features models: a way to study the success of naive imputation

no code implementations6 Feb 2024 Alexis Ayme, Claire Boyer, Aymeric Dieuleveut, Erwan Scornet

Constant (naive) imputation is still widely used in practice as this is a first easy-to-use technique to deal with missing data.

Imputation
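
A minimal sketch of naive imputation as discussed above: replace each missing entry by a column-wise constant (here the mean of observed values) before fitting any downstream model. Data, missingness rate, and the mean-imputation choice are illustrative assumptions.

```python
import numpy as np

# Naive (constant) imputation: fill each missing entry with the column mean of
# the observed values, then use the completed matrix downstream.
rng = np.random.default_rng(7)
X = rng.standard_normal((100, 5))
mask = rng.random(X.shape) < 0.2          # 20% entries missing completely at random
X_missing = np.where(mask, np.nan, X)

col_means = np.nanmean(X_missing, axis=0)
X_imputed = np.where(np.isnan(X_missing), col_means, X_missing)
print("RMSE of imputed entries against the ground truth:",
      np.sqrt(np.mean((X_imputed[mask] - X[mask]) ** 2)))
```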
