1 code implementation • 13 Sep 2024 • Constantin Philippenko, Kevin Scaman, Laurent Massoulié
We provide a linear rate of convergence of the excess loss that depends on the ratio $\sigma_{\max} / \sigma_{r}$, where $\sigma_{r}$ is the $r^{\mathrm{th}}$ singular value of the concatenation $\mathbf{S}$ of the matrices $(\mathbf{S}^i)_{i=1}^N$.
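As a quick illustration of the quantity governing this rate, here is a minimal NumPy sketch (shapes and sizes are illustrative assumptions, not the paper's code) that forms the concatenation $\mathbf{S}$ of the local matrices and computes $\sigma_{\max} / \sigma_{r}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, d, r = 4, 50, 20, 5                   # N clients, each holding an (n x d) matrix (assumed sizes)
S_local = [rng.standard_normal((n, d)) for _ in range(N)]

S = np.concatenate(S_local, axis=0)         # concatenation S of the matrices (S^i)_{i=1}^N
sigma = np.linalg.svd(S, compute_uv=False)  # singular values, in decreasing order

ratio = sigma[0] / sigma[r - 1]             # sigma_max / sigma_r, the quantity driving the linear rate
print(f"sigma_max / sigma_r = {ratio:.3f}")
```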
no code implementations • 2 Aug 2023 • Constantin Philippenko, Aymeric Dieuleveut
In this paper, we investigate the impact of compression, a technique widely used in distributed and federated learning, on stochastic gradient algorithms for machine learning.
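To fix ideas on the setting studied, here is a minimal sketch of stochastic gradient descent with a compressed gradient on a least-squares toy problem; the random-sparsification compressor, step size, and problem are illustrative assumptions, not the algorithm analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = X @ w_star + 0.1 * rng.standard_normal(n)

def rand_sparsify(v, p=0.5):
    """Unbiased random sparsification: keep each coordinate with prob. p, rescale by 1/p."""
    mask = rng.random(v.shape) < p
    return mask * v / p

w = np.zeros(d)
lr = 0.01
for _ in range(2000):
    i = rng.integers(n)
    grad = (X[i] @ w - y[i]) * X[i]      # stochastic gradient of 0.5 * (x_i^T w - y_i)^2
    w -= lr * rand_sparsify(grad)        # a compressed gradient is used instead of the exact one
print("final mean squared error:", np.mean((X @ w - y) ** 2))
```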
1 code implementation • 10 Oct 2022 • Jean Ogier du Terrail, Samy-Safwan Ayed, Edwige Cyffers, Felix Grimberg, Chaoyang He, Regis Loeb, Paul Mangold, Tanguy Marchand, Othmane Marfoq, Erum Mushtaq, Boris Muzellec, Constantin Philippenko, Santiago Silva, Maria Teleńczuk, Shadi Albarqouni, Salman Avestimehr, Aurélien Bellet, Aymeric Dieuleveut, Martin Jaggi, Sai Praneeth Karimireddy, Marco Lorenzi, Giovanni Neglia, Marc Tommasi, Mathieu Andreux
In this work, we propose a novel cross-silo dataset suite focused on healthcare, FLamby (Federated Learning AMple Benchmark of Your cross-silo strategies), to bridge the gap between theory and practice of cross-silo FL.
2 code implementations • NeurIPS 2021 • Constantin Philippenko, Aymeric Dieuleveut
To obtain this improvement, we design MCM, an algorithm in which downlink compression impacts only the local models, while the global model is preserved.
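A stripped-down sketch of that idea follows: the server keeps an exact global model and only broadcasts a compressed copy for the workers' local computations. This is an illustration under simplifying assumptions (quadratic losses, a simple random-sparsification compressor, no memory mechanism), not the MCM algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, T, lr = 10, 5, 500, 0.05
A = [rng.standard_normal((20, d)) for _ in range(N)]
b = [a @ rng.standard_normal(d) for a in A]

def compress(v, p=0.5):                    # unbiased random sparsification (assumed compressor)
    mask = rng.random(v.shape) < p
    return mask * v / p

w_global = np.zeros(d)                     # exact global model, kept by the server
for _ in range(T):
    w_received = compress(w_global)        # downlink: workers only see a compressed model
    grads = [a.T @ (a @ w_received - y) / len(y) for a, y in zip(A, b)]
    g = compress(np.mean(grads, axis=0))   # uplink: compressed aggregated gradient
    w_global -= lr * g                     # the global model itself is never degraded by downlink compression
avg_loss = np.mean([np.mean((a @ w_global - y) ** 2) for a, y in zip(A, b)])
print("final average loss:", avg_loss)
```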
1 code implementation • 25 Jun 2020 • Constantin Philippenko, Aymeric Dieuleveut
We introduce a framework, Artemis, to tackle the problem of learning in a distributed or federated setting with communication constraints and partial device participation.
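In the spirit of that setting, the sketch below shows a memory-based uplink compression scheme with a random subset of participating devices at each round; the compressor, step sizes, memory rate, and the omission of downlink compression are simplifying assumptions on our part, not the exact update rule of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, T, lr, alpha = 10, 10, 800, 0.05, 0.1
A = [rng.standard_normal((30, d)) for _ in range(N)]
b = [a @ rng.standard_normal(d) + 0.05 * rng.standard_normal(30) for a in A]

def compress(v, p=0.5):                    # unbiased random sparsification (assumed compressor)
    mask = rng.random(v.shape) < p
    return mask * v / p

w = np.zeros(d)
memory = [np.zeros(d) for _ in range(N)]   # per-device memory for uplink compression
for _ in range(T):
    active = rng.choice(N, size=N // 2, replace=False)  # partial device participation
    g_hat = []
    for i in active:
        grad = A[i].T @ (A[i] @ w - b[i]) / len(b[i])   # local gradient
        delta = compress(grad - memory[i])              # only the compressed difference is sent
        g_hat.append(memory[i] + delta)                 # server-side gradient estimate
        memory[i] = memory[i] + alpha * delta           # memory update, mirrored on both sides
    w -= lr * np.mean(g_hat, axis=0)       # (downlink compression omitted in this sketch)
avg_loss = np.mean([np.mean((a @ w - y) ** 2) for a, y in zip(A, b)])
print("final average loss:", avg_loss)
```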