Search Results for author: Mathieu Even

Found 10 papers, 3 papers with code

Asynchronous SGD on Graphs: a Unified Framework for Asynchronous Decentralized and Federated Optimization

no code implementations • 1 Nov 2023 • Mathieu Even, Anastasia Koloskova, Laurent Massoulié

Decentralized and asynchronous communications are two popular techniques for speeding up distributed machine learning: the former removes the dependency on a central orchestrator, the latter the need for synchronization.

Generalization Error of First-Order Methods for Statistical Learning with Generic Oracles

no code implementations • 10 Jul 2023 • Kevin Scaman, Mathieu Even, Laurent Massoulié

In this paper, we provide a novel framework for the analysis of generalization error of first-order optimization algorithms for statistical learning when the gradient can only be accessed through partial observations given by an oracle.

Tasks: Quantization, Transfer Learning

Stochastic Gradient Descent under Markovian Sampling Schemes

no code implementations • 28 Feb 2023 • Mathieu Even

We study a variation of vanilla stochastic gradient descent where the optimizer only has access to a Markovian sampling scheme.
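As a toy illustration of this setting (a hypothetical sketch, not the paper's algorithm or analysis): SGD on a least-squares problem where the index of the sample used at each step follows a Markov chain over the data, rather than being drawn i.i.d.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy interpolating least-squares data: n samples, d features.
n, d = 20, 3
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star

# A simple Markov sampling scheme over sample indices:
# from index i, move to (i + 1) % n or stay put, each with probability 1/2.
def next_index(i):
    return (i + 1) % n if rng.random() < 0.5 else i

w = np.zeros(d)
i, lr = 0, 0.05
for _ in range(20000):
    i = next_index(i)                 # Markovian, not i.i.d., sampling
    grad = (X[i] @ w - y[i]) * X[i]   # gradient of 0.5 * (x_i^T w - y_i)^2
    w -= lr * grad

# Despite the correlated sampling, the iterates approach the minimizer,
# because the chain is ergodic with uniform stationary distribution.
err = np.linalg.norm(w - w_star)
```

The chain here is a lazy cycle over the indices, so every sample is still visited with equal long-run frequency; only the i.i.d. assumption of vanilla SGD is dropped.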

(S)GD over Diagonal Linear Networks: Implicit Regularisation, Large Stepsizes and Edge of Stability

no code implementations • 17 Feb 2023 • Mathieu Even, Scott Pesme, Suriya Gunasekar, Nicolas Flammarion

In this paper, we investigate the impact of stochasticity and large stepsizes on the implicit regularisation of gradient descent (GD) and stochastic gradient descent (SGD) over diagonal linear networks.

Task: Regression

Asynchronous SGD Beats Minibatch SGD Under Arbitrary Delays

1 code implementation • 15 Jun 2022 • Konstantin Mishchenko, Francis Bach, Mathieu Even, Blake Woodworth

The existing analysis of asynchronous stochastic gradient descent (SGD) degrades dramatically when any delay is large, giving the impression that performance depends primarily on the delay.
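A minimal sketch of the delayed-gradient model behind asynchronous SGD (illustrative only, not the paper's method): each applied update uses a gradient that was computed at a stale iterate, here with a fixed delay.

```python
import numpy as np
from collections import deque

# Simple quadratic objective f(w) = 0.5 * ||w - target||^2.
target = np.array([1.0, -2.0, 0.5])
grad = lambda w: w - target

w = np.zeros(3)
lr = 0.1
in_flight = deque()   # gradients being "computed", applied with delay
max_delay = 5

for t in range(500):
    in_flight.append(grad(w))        # a worker reads the current iterate
    if len(in_flight) > max_delay:   # its gradient arrives max_delay steps later
        w -= lr * in_flight.popleft()

# With a stepsize small enough relative to the delay, the stale
# updates still drive the iterates to the minimizer.
dist = np.linalg.norm(w - target)
```

Each update thus follows w_{t+1} = w_t − lr · ∇f(w_{t−τ}) with τ = 5; stability requires a stepsize that shrinks with the delay, which is the sensitivity the quoted analysis refers to.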

Muffliato: Peer-to-Peer Privacy Amplification for Decentralized Optimization and Averaging

1 code implementation • 10 Jun 2022 • Edwige Cyffers, Mathieu Even, Aurélien Bellet, Laurent Massoulié

In this work, we introduce pairwise network differential privacy, a relaxation of LDP that captures the fact that the privacy leakage from a node $u$ to a node $v$ may depend on their relative position in the graph.
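A hypothetical sketch of the primitive underlying this line of work, noisy gossip averaging: each node perturbs its value with local Gaussian noise, then repeated neighbor averaging mixes the values over the graph (the graph, noise scale, and gossip matrix below are illustrative choices, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(2)

# Ring graph on n nodes; each node holds one private scalar.
n = 8
values = rng.normal(size=n)
sigma = 0.1   # scale of the local Gaussian noise

# Step 1: each node adds noise locally before sharing anything.
x = values + rng.normal(scale=sigma, size=n)

# Step 2: synchronous gossip matrix: average with the two ring neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

for _ in range(200):
    x = W @ x   # one gossip round

# W is doubly stochastic, so all nodes converge to the average
# of the noisy values.
consensus_gap = np.max(x) - np.min(x)
```

In this picture, the leakage from node u to node v travels only along graph paths between them, which is the intuition the pairwise relaxation of local differential privacy formalizes.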

Task: Graph Matching

A Continuized View on Nesterov Acceleration for Stochastic Gradient Descent and Randomized Gossip

1 code implementation • 10 Jun 2021 • Mathieu Even, Raphaël Berthier, Francis Bach, Nicolas Flammarion, Pierre Gaillard, Hadrien Hendrikx, Laurent Massoulié, Adrien Taylor

We introduce the continuized Nesterov acceleration, a close variant of Nesterov acceleration whose variables are indexed by a continuous time parameter.
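A toy sketch of the continuous-time indexing idea in its simplest application, randomized gossip (this shows the Poisson-clock mechanic only, not the accelerated variant): pairwise averaging events occur at the jump times of a Poisson process rather than at discrete iteration counts.

```python
import numpy as np

rng = np.random.default_rng(3)

# Randomized gossip with a continuous time index: edges of a ring
# activate at the arrival times of a rate-1 Poisson process.
n = 6
x = rng.normal(size=n)
target = x.mean()   # pairwise averaging preserves the global average

t, horizon = 0.0, 500.0
while t < horizon:
    t += rng.exponential(1.0)             # exponential inter-arrival times
    i = rng.integers(n)
    j = (i + 1) % n                       # a ring edge activated at time t
    x[i] = x[j] = 0.5 * (x[i] + x[j])     # local averaging at the jump time

gap = np.max(x) - np.min(x)
```

Indexing the iterates by the continuous time t, with updates at random times, is what makes such schemes amenable to continuous-time (ODE-style) analysis while remaining implementable asynchronously.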

Asynchronous speedup in decentralized optimization

no code implementations • 7 Jun 2021 • Mathieu Even, Hadrien Hendrikx, Laurent Massoulié

Our approach yields a precise characterization of convergence time and of its dependency on heterogeneous delays in the network.

Concentration of Non-Isotropic Random Tensors with Applications to Learning and Empirical Risk Minimization

no code implementations • 4 Feb 2021 • Mathieu Even, Laurent Massoulié

Dimension is an inherent bottleneck to some modern learning tasks, where optimization methods suffer from the size of the data.

Task: Distributed Optimization
