NeurIPS 2021 • Louis Leconte, Aymeric Dieuleveut, Edouard Oyallon, Eric Moulines, Gilles Pages
The growing size of models and datasets has made distributed implementation of stochastic gradient descent (SGD) an active field of research.
4 Jun 2021 • Nathan Grinsztajn, Louis Leconte, Philippe Preux, Edouard Oyallon
We present a new approach for learning unsupervised node representations in community graphs.
11 Jun 2021 • Eugene Belilovsky, Louis Leconte, Lucas Caccia, Michael Eickenberg, Edouard Oyallon
Using a replay buffer, we show that this approach can be extended to asynchronous settings, in which modules can continue to operate and update under possibly large communication delays.
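The paper's training pipeline is not reproduced here; as a rough illustration of the replay-buffer idea it mentions, the sketch below caches stale activations so a downstream module can keep training without waiting for fresh upstream outputs. The class name, capacity, and sampling policy are all hypothetical, not taken from the paper.

```python
import random
from collections import deque

class ActivationReplayBuffer:
    """Hypothetical fixed-size buffer of cached upstream activations.

    A downstream module samples from this buffer to train asynchronously,
    tolerating communication delays: old activations are evicted FIFO.
    """

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def push(self, activation):
        # Store the latest activation; oldest entries drop out automatically.
        self.buffer.append(activation)

    def sample(self, batch_size):
        # Draw a batch of (possibly stale) activations without replacement.
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))
```

In this sketch, staleness is bounded by the buffer capacity: once the buffer is full, every stored activation is at most `capacity` pushes old.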
7 Nov 2022 • Louis Leconte, Sholom Schechtman, Eric Moulines
First, we formulate the training of quantized neural networks (QNNs) as a smoothed sequence of interval-constrained optimization problems.
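The paper's exact smoothed formulation is not given in this excerpt. As a generic illustration of the two ingredients it names, the sketch below combines an interval constraint (projecting latent weights onto [-1, 1]) with a smooth surrogate for the hard quantizer that sharpens as a temperature grows; both the function names and the tanh surrogate are assumptions for illustration, not the authors' construction.

```python
import math

def soft_binarize(w, beta):
    """Smooth surrogate for the hard quantizer sign(w).

    As beta -> infinity, tanh(beta * w) approaches sign(w), giving a
    sequence of smoothed problems that converge to the quantized one.
    """
    return math.tanh(beta * w)

def project_interval(w, lo=-1.0, hi=1.0):
    """Enforce the interval constraint by clipping the latent weight."""
    return max(lo, min(hi, w))
```

A training loop in this style would update the latent real-valued weight, project it back into the interval after each step, and gradually increase `beta` so the smooth surrogate hardens toward true Boolean/quantized weights.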
25 May 2023 • Louis Leconte, Van Minh Nguyen, Eric Moulines
In this paper, we propose a novel centralized Asynchronous Federated Learning (FL) framework, FAVANO, for training Deep Neural Networks (DNNs) in resource-constrained environments.
29 Jan 2024 • Louis Leconte
The notion of Boolean logic backpropagation was introduced to build neural networks with Boolean-valued weights and activations.
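The paper's Boolean backpropagation rules are not shown in this excerpt. As a minimal sketch of the kind of forward pass such networks use, the function below implements a standard XNOR-popcount neuron: with Boolean weights and inputs, a dot product reduces to counting agreements and thresholding. The function name and majority threshold are illustrative assumptions, not the authors' definition.

```python
def xnor_popcount_neuron(weights, inputs):
    """Forward pass of a Boolean neuron (illustrative sketch).

    With Boolean weights and inputs, multiplication becomes XNOR
    (agreement), and the accumulation becomes a popcount; the neuron
    fires when at least half the positions agree.
    """
    assert len(weights) == len(inputs)
    # XNOR: count positions where weight and input agree.
    agreements = sum(1 for w, x in zip(weights, inputs) if w == x)
    # Majority threshold replaces the usual bias + sign activation.
    return agreements >= len(weights) / 2
```

Because the whole computation is bitwise agreement plus a count, such neurons need no floating-point arithmetic, which is the usual motivation for fully Boolean networks.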