Search Results for author: Romain Chor

Found 3 papers, 1 paper with code

Federated Learning You May Communicate Less Often!

no code implementations • 9 Jun 2023 • Milad Sefidgaran, Romain Chor, Abdellatif Zaidi, Yijun Wan

Moreover, when specialized to the case $R=1$ (sometimes referred to as "one-shot" FL or distributed learning), our bounds suggest that the generalization error of the FL setting decreases faster than that of centralized learning by a factor of $\mathcal{O}(\sqrt{\log(K)/K})$, thereby generalizing recent findings in this direction to arbitrary loss functions and algorithms.

Federated Learning
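The excerpt above quantifies the claimed advantage over centralized learning via the factor $\mathcal{O}(\sqrt{\log(K)/K})$. The short Python sketch below simply tabulates this factor for a few client counts $K$ to show how quickly it shrinks; it is an illustration of the stated rate only, not anything from the paper.

```python
# Tabulate the improvement factor sqrt(log(K)/K) from the abstract
# above for a few client counts K (illustration of the rate only).
import math

for K in (2, 5, 10, 50, 100, 500):
    factor = math.sqrt(math.log(K) / K)
    print(f"K={K:4d}  sqrt(log(K)/K) = {factor:.4f}")
```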

More Communication Does Not Result in Smaller Generalization Error in Federated Learning

no code implementations • 24 Apr 2023 • Romain Chor, Milad Sefidgaran, Abdellatif Zaidi

We establish an upper bound on the generalization error that accounts explicitly for the effect of the number of communication rounds $R$ (in addition to the number of participating devices $K$ and the dataset size $n$).

Federated Learning
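The title's claim, that more communication rounds need not shrink the generalization error, can be probed empirically. Below is a minimal, self-contained FedAvg-style toy in NumPy: $K$ clients run local SGD on synthetic linear regression with a fixed total local-step budget, models are averaged $R$ times, and the train/test gap is recorded for several $R$. The data model, hyperparameters, and FedAvg variant are all arbitrary choices for illustration, not the paper's experimental setup.

```python
# Toy FedAvg-style run: vary the number of averaging rounds R at a
# fixed total local-step budget and report the empirical train/test gap.
import numpy as np

rng = np.random.default_rng(0)
d, n_per_client, K = 20, 50, 10
w_true = rng.normal(size=d)

def make_data(n):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.5 * rng.normal(size=n)
    return X, y

clients = [make_data(n_per_client) for _ in range(K)]
X_test, y_test = make_data(2000)

def mse(w, X, y):
    return np.mean((X @ w - y) ** 2)

def run_fedavg(R, total_local_steps=100, lr=0.01):
    w = np.zeros(d)
    steps_per_round = total_local_steps // R
    for _ in range(R):
        local = []
        for X, y in clients:
            wk = w.copy()
            for _ in range(steps_per_round):
                i = rng.integers(len(y))          # single-sample SGD step
                grad = 2 * (X[i] @ wk - y[i]) * X[i]
                wk -= lr * grad
            local.append(wk)
        w = np.mean(local, axis=0)                # server averaging step
    train = np.mean([mse(w, X, y) for X, y in clients])
    test = mse(w, X_test, y_test)
    return test - train                           # empirical generalization gap

for R in (1, 5, 25, 100):
    print(f"R={R:3d}  gap={run_fedavg(R):+.4f}")
```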

Rate-Distortion Theoretic Bounds on Generalization Error for Distributed Learning

1 code implementation • 6 Jun 2022 • Milad Sefidgaran, Romain Chor, Abdellatif Zaidi

In this paper, we use tools from rate-distortion theory to establish new upper bounds on the generalization error of statistical distributed learning algorithms.

Federated Learning • Generalization Bounds
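For context on the style of result this paper refines: a classical information-theoretic bound in this literature is the mutual-information bound of Xu and Raginsky (2017), stated below. Rate-distortion approaches of the kind used in this paper tighten such bounds via lossy compression of the hypothesis; the paper's actual bounds differ in form, so the display is background, not the paper's result.

```latex
% Mutual-information generalization bound (Xu & Raginsky, 2017):
% S is the size-n training sample, W the algorithm's output, and the
% loss is assumed sigma-sub-Gaussian.
\[
  \bigl|\mathbb{E}[\mathrm{gen}(W, S)]\bigr|
  \;\le\; \sqrt{\frac{2\sigma^{2}\, I(W; S)}{n}}
\]
```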
