no code implementations • 15 Jan 2024 • Ouiame Marnissi, Hajar El Hammouti, El Houcine Bergou
The performance of federated learning depends on the selection of clients that participate in the learning at each round.
no code implementations • 19 Oct 2023 • Aritra Dutta, El Houcine Bergou, Soumia Boucherouite, Nicklas Werge, Melih Kandemir, Xin Li
Additionally, our analyses allow us to measure the density of the $\epsilon$-stationary points in the final iterates of SGD, and we recover the classical $O(\frac{1}{\sqrt{T}})$ asymptotic rate under various existing assumptions on the objective function and the bounds on the stochastic gradient.
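To make the density measurement concrete, here is a minimal sketch on a toy one-dimensional objective with plain SGD; the function, noise scale, and threshold are illustrative stand-ins, not the paper's setting:

```python
import numpy as np

def grad(x):
    # Gradient of a toy non-convex objective f(x) = x^4/4 - x^2/2 (illustrative).
    return x**3 - x

rng = np.random.default_rng(0)
T, lr, noise, eps = 10_000, 0.01, 0.1, 0.05

x, iterates = 2.0, []
for _ in range(T):
    g = grad(x) + noise * rng.standard_normal()  # stochastic gradient estimate
    x -= lr * g
    iterates.append(x)

# "Density" here: the fraction of late iterates whose true gradient
# norm falls below eps, i.e., that are eps-stationary points.
tail = np.array(iterates[T // 2:])
density = np.mean(np.abs(grad(tail)) <= eps)
print(f"fraction of eps-stationary iterates in the tail: {density:.3f}")
```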
no code implementations • 6 Sep 2023 • Mouhamed Naby Ndiaye, El Houcine Bergou, Hajar El Hammouti
To tackle this problem, we first derive a closed-form expression of the expected AoI that involves the devices' selection probabilities.
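One simple special case illustrates the flavor of such an expression: assuming a device is selected independently with probability p each round and its age resets to zero on selection (a simplification, not the paper's full model), the steady-state expected age is (1 - p)/p, which a short simulation confirms:

```python
import numpy as np

def expected_aoi_closed_form(p):
    # Steady-state expected age when a device is selected with
    # probability p per round and its age resets to 0 on selection.
    return (1 - p) / p

def expected_aoi_simulated(p, rounds=200_000, seed=0):
    rng = np.random.default_rng(seed)
    age, total = 0, 0
    for _ in range(rounds):
        age = 0 if rng.random() < p else age + 1
        total += age
    return total / rounds

p = 0.2
print(expected_aoi_closed_form(p))  # 4.0
print(expected_aoi_simulated(p))    # ~4.0
```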
no code implementations • 31 Aug 2023 • El Houcine Bergou, Soumia Boucherouite, Aritra Dutta, Xin Li, Anna Ma
In this paper, we analyze the convergence of RK for noisy linear systems in which the coefficient matrix $A$ is corrupted with both additive and multiplicative noise and the vector $b$ is noisy as well.
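For reference, a minimal randomized Kaczmarz loop run on such a corrupted system; the noise scales below are illustrative stand-ins for the regimes analyzed:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 20
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true

# What the solver actually sees: multiplicative and additive corruption
# of A, plus additive noise on b (illustrative scales).
A_noisy = A * (1 + 0.01 * rng.standard_normal((m, n))) + 0.01 * rng.standard_normal((m, n))
b_noisy = b + 0.01 * rng.standard_normal(m)

row_norms2 = np.sum(A_noisy**2, axis=1)
probs = row_norms2 / row_norms2.sum()

x = np.zeros(n)
for _ in range(20_000):
    i = rng.choice(m, p=probs)  # row sampled proportional to its squared norm
    a_i = A_noisy[i]
    x += (b_noisy[i] - a_i @ x) / row_norms2[i] * a_i  # project onto row i

print("error vs. true solution:", np.linalg.norm(x - x_true))
```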
no code implementations • 15 Mar 2023 • Mouhamed Naby Ndiaye, El Houcine Bergou, Hajar El Hammouti
Our objective is to optimally design the UAVs' trajectories and the subsets of visited IoT devices such that the global Age-of-Updates (AoU) is minimized.
no code implementations • 15 Oct 2022 • Latifa Errami, El Houcine Bergou
In this work, we study the problem of Byzantine-robust learning when the data across clients is heterogeneous.
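A standard baseline in this setting (not the paper's specific method) replaces the mean of client updates with a robust aggregator such as the coordinate-wise median, sketched here:

```python
import numpy as np

def coordinate_wise_median(updates):
    # updates: (num_clients, dim) array of client gradients/updates.
    # The median tolerates a minority of arbitrarily corrupted rows,
    # whereas a single Byzantine client can hijack the mean.
    return np.median(updates, axis=0)

rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.5, size=(8, 4))  # heterogeneous, centered near 1
byzantine = np.full((2, 4), 1e6)                      # adversarial updates
updates = np.vstack([honest, byzantine])

print("mean:  ", updates.mean(axis=0))             # destroyed by the outliers
print("median:", coordinate_wise_median(updates))  # stays near 1
```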
no code implementations • 16 Sep 2022 • Soumia Boucherouite, Grigory Malinovsky, Peter Richtárik, El Houcine Bergou
In this paper, we propose a new zeroth-order optimization method, minibatch stochastic three points (MiSTP), to solve an unconstrained minimization problem in a setting where only an approximation of the objective function evaluation is possible.
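A minimal sketch of the three-point step, assuming a minibatch-style noisy objective estimator and a fixed stepsize; the method's actual sampling distributions and stepsize schedule follow the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def f_batch(x):
    # Noisy (minibatch-style) estimate of a toy objective f(x) = ||x||^2;
    # only this kind of zeroth-order access is assumed.
    return np.sum(x**2) + 0.01 * rng.standard_normal()

def mistp(x0, alpha=0.1, iters=2000):
    x = x0.copy()
    for _ in range(iters):
        s = rng.standard_normal(x.shape)
        s /= np.linalg.norm(s)  # random unit direction
        # Compare the current point with a forward and a backward probe,
        # and keep whichever of the three has the smallest estimate.
        x = min([x, x + alpha * s, x - alpha * s], key=f_batch)
    return x

x = mistp(np.full(10, 5.0))
print("final squared norm:", np.sum(x**2))
```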
no code implementations • 12 Sep 2022 • El Houcine Bergou, Konstantin Burlachenko, Aritra Dutta, Peter Richtárik
Recently, Hanzely and Richtárik (2020) proposed a new formulation for training personalized FL models aimed at balancing the trade-off between the traditional global model and the local models that could be trained by individual devices using their private data only.
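The formulation mixes per-device models with a global average through a penalty parameter; a sketch of the objective, with toy quadratic losses standing in for the devices' private objectives:

```python
import numpy as np

def personalized_fl_objective(X, local_losses, lam):
    # X: (n, d) stack of per-device models x_1, ..., x_n.
    # Objective: (1/n) sum_i f_i(x_i) + (lam / 2n) sum_i ||x_i - x_bar||^2.
    # lam = 0 decouples into purely local training; lam -> infinity
    # forces every x_i toward a single traditional global model.
    n = X.shape[0]
    x_bar = X.mean(axis=0)
    local = sum(f(x) for f, x in zip(local_losses, X)) / n
    penalty = lam / (2 * n) * np.sum((X - x_bar) ** 2)
    return local + penalty

# Toy heterogeneous quadratics f_i(x) = ||x - c_i||^2 (illustrative only).
rng = np.random.default_rng(0)
centers = rng.standard_normal((4, 3))
losses = [lambda x, c=c: np.sum((x - c) ** 2) for c in centers]
X = rng.standard_normal((4, 3))
print(personalized_fl_objective(X, losses, lam=1.0))
```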
no code implementations • 19 Nov 2021 • Ouiame Marnissi, Hajar El Hammouti, El Houcine Bergou
We investigate and design a device selection strategy based on the importance of the gradient norms.
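One simple instance of such a strategy, sketched here, samples devices with probability proportional to their gradient norms; the paper's actual selection rule may differ:

```python
import numpy as np

def select_devices(grad_norms, k, rng):
    # Sample k devices without replacement, with probability
    # proportional to each device's gradient norm, so devices whose
    # updates currently matter more are more likely to participate.
    p = np.asarray(grad_norms, dtype=float)
    p /= p.sum()
    return rng.choice(len(grad_norms), size=k, replace=False, p=p)

rng = np.random.default_rng(0)
norms = [0.1, 2.0, 0.5, 3.0, 0.05, 1.0]
print(select_devices(norms, k=2, rng=rng))
```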
1 code implementation • 19 Nov 2019 • Aritra Dutta, El Houcine Bergou, Ahmed M. Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, Panos Kalnis
Compressed communication, in the form of sparsification or quantization of stochastic gradients, is employed to reduce communication costs in distributed data-parallel training of deep neural networks.
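For concreteness, a sketch of one such compressor, Top-k sparsification, which transmits only the largest-magnitude gradient coordinates:

```python
import numpy as np

def topk_sparsify(grad, k):
    # Keep the k largest-magnitude coordinates and zero out the rest;
    # only the surviving (index, value) pairs need to be communicated.
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

g = np.array([0.1, -3.0, 0.02, 2.5, -0.4])
print(topk_sparsify(g, k=2))  # [ 0.  -3.   0.   2.5  0. ]
```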
1 code implementation • 28 May 2019 • Aritra Dutta, El Houcine Bergou, Yunming Xiao, Marco Canini, Peter Richtárik
In contrast to RNA which computes extrapolation coefficients by (approximately) setting the gradient of the objective function to zero at the extrapolated point, we propose a more direct approach, which we call direct nonlinear acceleration (DNA).
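A minimal sketch of that contrast, using SciPy's generic constrained minimizer to pick mixing coefficients that minimize the objective value at the extrapolated point directly; the paper derives more efficient procedures, this only makes the distinction concrete:

```python
import numpy as np
from scipy.optimize import minimize

def dna_extrapolate(f, iterates):
    # Direct approach: choose coefficients c (summing to 1) that minimize
    # the objective at the extrapolated point sum_i c_i x_i, instead of
    # (approximately) zeroing its gradient there as RNA does.
    X = np.stack(iterates)  # (k, d)
    k = X.shape[0]
    obj = lambda c: f(c @ X)
    cons = {"type": "eq", "fun": lambda c: c.sum() - 1.0}
    res = minimize(obj, np.full(k, 1.0 / k), constraints=cons)
    return res.x @ X

# Toy quadratic and a few gradient-descent-like iterates (illustrative).
f = lambda x: np.sum((x - 1.0) ** 2)
iterates = [np.full(3, v) for v in (4.0, 3.1, 2.5, 2.0)]
print(dna_extrapolate(f, iterates))  # close to [1. 1. 1.]
```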