no code implementations • 1 Nov 2023 • Mathieu Even, Anastasia Koloskova, Laurent Massoulié
Decentralized and asynchronous communications are two popular techniques for reducing the communication complexity of distributed machine learning, by respectively removing the dependence on a central orchestrator and the need for synchronization.
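A minimal sketch of how these two ingredients combine, assuming a ring topology, quadratic local losses, and a fixed step size (none of which come from the paper): each activation of a random edge triggers local gradient steps and a pairwise average, with no central server and no global clock.

```python
import numpy as np

# Sketch of asynchronous decentralized SGD via randomized pairwise gossip.
# Topology, losses, and step size are illustrative assumptions.
rng = np.random.default_rng(0)
n, d, step = 8, 5, 0.1
edges = [(i, (i + 1) % n) for i in range(n)]   # ring graph
targets = rng.normal(size=(n, d))              # node i minimizes ||x - targets[i]||^2 / 2
x = np.zeros((n, d))                           # one local iterate per node

for t in range(2000):
    i, j = edges[rng.integers(len(edges))]     # a random edge "wakes up": no synchronization
    for k in (i, j):
        x[k] -= step * (x[k] - targets[k])     # local gradient step
    x[i] = x[j] = (x[i] + x[j]) / 2            # pairwise averaging replaces the orchestrator

print(np.linalg.norm(x.mean(axis=0) - targets.mean(axis=0)))  # near the global minimizer
```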
no code implementations • 10 Jul 2023 • Kevin Scaman, Mathieu Even, Laurent Massoulié
In this paper, we provide a novel framework for the analysis of generalization error of first-order optimization algorithms for statistical learning when the gradient can only be accessed through partial observations given by an oracle.
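As a toy instance of such an oracle, consider SGD on least squares where each query reveals only a random subset of gradient coordinates, rescaled to stay unbiased; this specific oracle and problem are assumptions for illustration, not the paper's framework.

```python
import numpy as np

# Sketch of a first-order method driven by partial gradient observations:
# the oracle reveals k random coordinates of a stochastic gradient (assumption).
rng = np.random.default_rng(0)
n, d, k = 200, 20, 5
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d)

def oracle(w):
    i = rng.integers(n)
    g = (A[i] @ w - b[i]) * A[i]               # full stochastic gradient
    mask = np.zeros(d)
    mask[rng.choice(d, size=k, replace=False)] = d / k  # unbiased sparsification
    return g * mask                            # only k coordinates observed

w = np.zeros(d)
for t in range(20000):
    w -= 0.005 * oracle(w)

print(np.linalg.norm(A @ w - b) / np.linalg.norm(b))  # relative residual
```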
no code implementations • 28 Feb 2023 • Mathieu Even
We study a variation of vanilla stochastic gradient descent where the optimizer only has access to a Markovian sampling scheme.
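A minimal sketch of the sampling scheme, assuming a least-squares objective and a lazy random walk on the data indices (both illustrative): the index used at step t depends on the index used at step t-1, so the gradients are no longer i.i.d.

```python
import numpy as np

# SGD where the data index follows a Markov chain instead of i.i.d. sampling.
# Objective and chain are illustrative assumptions.
rng = np.random.default_rng(0)
n, d, step = 50, 5, 0.01
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

w = np.zeros(d)
i = 0                                       # chain state = current data index
for t in range(30000):
    i = (i + rng.choice([-1, 0, 1])) % n    # lazy random walk: consecutive samples correlate
    w -= step * (A[i] @ w - b[i]) * A[i]    # gradient at the current state only

print(np.linalg.norm(A @ w - b) / np.linalg.norm(b))
```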
no code implementations • 17 Feb 2023 • Mathieu Even, Scott Pesme, Suriya Gunasekar, Nicolas Flammarion
In this paper, we investigate the impact of stochasticity and large stepsizes on the implicit regularisation of gradient descent (GD) and stochastic gradient descent (SGD) over diagonal linear networks.
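A small experiment in this direction, using the standard two-layer diagonal parametrisation w = u ⊙ u − v ⊙ v with a sparse ground truth (sizes, initialisation scale, and step size are assumptions): with small initialisation, SGD recovers a sparse interpolator.

```python
import numpy as np

# SGD over a diagonal linear network: effective weights w = u*u - v*v.
# Problem sizes, initialisation scale, and step size are assumptions.
rng = np.random.default_rng(0)
n, d = 40, 100
X = rng.normal(size=(n, d))
w_star = np.zeros(d); w_star[:3] = 1.0         # sparse ground truth
y = X @ w_star

alpha = 0.01                                    # small init favours the "rich" sparse regime
u, v = alpha * np.ones(d), alpha * np.ones(d)
step = 0.002
for t in range(50000):
    i = rng.integers(n)                         # stochasticity: one sample per step
    r = X[i] @ (u * u - v * v) - y[i]
    u, v = u - step * r * 2 * u * X[i], v + step * r * 2 * v * X[i]  # chain rule through w

print(np.round((u * u - v * v)[:6], 2))         # support of w_star is recovered
```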
1 code implementation • 15 Jun 2022 • Konstantin Mishchenko, Francis Bach, Mathieu Even, Blake Woodworth
The existing analysis of asynchronous stochastic gradient descent (SGD) degrades dramatically when any delay is large, giving the impression that performance depends primarily on the delay.
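A minimal simulation of the phenomenon (the quadratic objective and fixed delay are assumptions): every gradient applied to the iterate was computed on parameters read `delay` steps earlier, yet with a suitably small step size the iterates still converge.

```python
import numpy as np
from collections import deque

# Asynchronous SGD with stale gradients: the update applied at step t was
# computed on the iterate from `delay` steps earlier. Objective and delay
# are illustrative assumptions.
rng = np.random.default_rng(0)
d, delay, step = 5, 10, 0.05
target = rng.normal(size=d)                 # minimize ||w - target||^2 / 2

w = np.zeros(d)
in_flight = deque()                         # gradients being computed by workers
for t in range(500):
    in_flight.append(w - target)            # a worker reads w and starts computing
    if len(in_flight) > delay:
        w -= step * in_flight.popleft()     # apply a gradient that is `delay` steps old

print(np.linalg.norm(w - target))
```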
1 code implementation • 10 Jun 2022 • Edwige Cyffers, Mathieu Even, Aurélien Bellet, Laurent Massoulié
In this work, we introduce pairwise network differential privacy, a relaxation of local differential privacy (LDP) that captures the fact that the privacy leakage from a node $u$ to a node $v$ may depend on their relative position in the graph.
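A sketch of the kind of protocol this definition is built for, assuming a ring graph, a one-shot Gaussian perturbation, and synchronous gossip rounds (all illustrative, not the paper's calibrated mechanism): what a node $v$ learns about a node $u$'s value through the averaging process depends on where $u$ and $v$ sit in the graph.

```python
import numpy as np

# Noisy gossip averaging on a ring: each node perturbs its private value once,
# then repeatedly averages with its neighbours. Noise scale, topology, and
# schedule are illustrative assumptions.
rng = np.random.default_rng(0)
n, sigma = 10, 0.1
values = rng.normal(size=n)                 # private local values
x = values + sigma * rng.normal(size=n)     # local Gaussian perturbation

W = np.zeros((n, n))                        # doubly stochastic gossip matrix
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

for t in range(200):
    x = W @ x                               # node v only sees its neighbours' mixtures,
                                            # so leakage from u to v decays with distance

print(x[0], values.mean())                  # consensus close to the true average
```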
1 code implementation • NeurIPS 2021 • Mathieu Even, Raphaël Berthier, Francis Bach, Nicolas Flammarion, Hadrien Hendrikx, Pierre Gaillard, Laurent Massoulié, Adrien Taylor
We introduce the "continuized" Nesterov acceleration, a close variant of Nesterov acceleration whose variables are indexed by a continuous time parameter.
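A toy simulation of the continuized template on a quadratic (the mixing and step-size coefficients below are heuristic placeholders, not the ones derived in the paper): two variables follow a linear mixing ODE in continuous time, and gradient steps are taken at the jump times of a rate-1 Poisson process.

```python
import numpy as np

# Continuized-style acceleration on f(x) = x^T H x / 2: deterministic mixing
# between the jump times of a Poisson process, a gradient step at each jump.
# Coefficients eta and gamma are heuristic assumptions.
rng = np.random.default_rng(0)
d = 20
H = np.diag(np.linspace(0.1, 1.0, d))
mu, L = 0.1, 1.0
eta, gamma = np.sqrt(mu / L), 1.0 / L

x = z = rng.normal(size=d)
t = 0.0
while t < 200.0:
    dt = rng.exponential(1.0)               # time to the next Poisson jump
    # exact mixing between jumps: x' = eta*(z - x), z' = eta*(x - z),
    # so x + z is conserved and x - z contracts at rate 2*eta
    m, diff = (x + z) / 2, (x - z) * np.exp(-2 * eta * dt)
    x, z = m + diff / 2, m - diff / 2
    z = z - gamma * (H @ x)                 # gradient step at the jump time
    t += dt

print(np.linalg.norm(x))                    # close to the minimizer 0
```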
no code implementations • 7 Jun 2021 • Mathieu Even, Hadrien Hendrikx, Laurent Massoulié
Our approach yields a precise characterization of convergence time and of its dependency on heterogeneous delays in the network.
no code implementations • 4 Feb 2021 • Mathieu Even, Laurent Massoulié
Dimension is an inherent bottleneck for some modern learning tasks, where optimization methods suffer from the sheer size of the data.