no code implementations • 9 Jun 2023 • Milad Sefidgaran, Romain Chor, Abdellatif Zaidi, Yijun Wan
Moreover, when specialized to the case $R=1$ (sometimes referred to as "one-shot" FL or distributed learning), our bounds suggest that the generalization error of the FL setting decreases faster than that of centralized learning by a factor of $\mathcal{O}(\sqrt{\log(K)/K})$, thereby generalizing recent findings in this direction to arbitrary loss functions and algorithms.
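For a rough sense of scale, here is a small numerical check (not from the paper's code) of how the factor $\sqrt{\log(K)/K}$ shrinks as the number of clients $K$ grows:

```python
import math

# Numerical illustration: the factor sqrt(log(K)/K) from the bound
# decreases as the number of participating devices K grows.
for K in (2, 10, 100, 1000):
    factor = math.sqrt(math.log(K) / K)
    print(f"K={K:5d}  sqrt(log(K)/K) = {factor:.4f}")
```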
no code implementations • 24 Apr 2023 • Romain Chor, Milad Sefidgaran, Abdellatif Zaidi
We establish an upper bound on the generalization error that accounts explicitly for the effect of the number of communication rounds $R$ (in addition to the number of participating devices $K$ and the dataset size $n$).
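To make the quantities concrete, below is a minimal toy sketch (the data model, function names, and hyperparameters are all hypothetical illustrations, not the paper's experimental setup): a linear model is trained with $R$ FedAvg-style rounds across $K$ clients holding $n$ samples each, and the empirical generalization gap (test loss minus train loss) is reported for a few values of $R$.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_client_data(n, d=10):
    """Synthetic linear-regression data for one client (toy setup)."""
    X = rng.normal(size=(n, d))
    y = X @ np.ones(d) + rng.normal(scale=0.5, size=n)
    return X, y

def local_sgd(w, X, y, lr=0.01, steps=20):
    """A few local gradient steps on the squared loss."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_gap(K=10, n=50, R=5, d=10):
    """Run R FedAvg rounds over K clients; return the empirical gap."""
    clients = [make_client_data(n, d) for _ in range(K)]
    X_test, y_test = make_client_data(2000, d)
    w = np.zeros(d)
    for _ in range(R):
        # Each client starts from the current global model, trains locally;
        # the server then averages the local models.
        local_models = [local_sgd(w.copy(), X, y) for X, y in clients]
        w = np.mean(local_models, axis=0)
    train_loss = np.mean([np.mean((X @ w - y) ** 2) for X, y in clients])
    test_loss = np.mean((X_test @ w - y_test) ** 2)
    return test_loss - train_loss

for R in (1, 5, 20):
    print(f"R={R:3d}  empirical gap = {fedavg_gap(R=R):.4f}")
```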
1 code implementation • 6 Jun 2022 • Milad Sefidgaran, Romain Chor, Abdellatif Zaidi
In this paper, we use tools from rate-distortion theory to establish new upper bounds on the generalization error of statistical distributed learning algorithms.
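For background, the classical information-theoretic bound that rate-distortion approaches refine (due to Xu and Raginsky, 2017, and not the bound derived in this paper) states that, for a loss that is $\sigma$-sub-Gaussian under the data distribution,

$$\mathbb{E}\big[\mathcal{L}(W) - \hat{\mathcal{L}}(W, S)\big] \;\le\; \sqrt{\frac{2\sigma^2\, I(S; W)}{n}},$$

where $S$ is the $n$-sample training set, $W$ is the hypothesis output by the algorithm, and $I(S;W)$ is their mutual information. Roughly speaking, rate-distortion style bounds of the kind developed here replace $I(S;W)$ with the minimal rate needed to describe $W$ up to a tolerated distortion in loss, which can be substantially smaller.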