Search Results for author: Milad Sefidgaran

Found 7 papers, 3 papers with code

Minimum Description Length and Generalization Guarantees for Representation Learning

1 code implementation · NeurIPS 2023 · Milad Sefidgaran, Abdellatif Zaidi, Piotr Krasnowski

Rather than the mutual information between the encoder's input and the representation, which is often believed in the related literature to reflect the algorithm's generalization capability but in fact falls short of doing so, our new bounds involve the "multi-letter" relative entropy between the distribution of the representations (or labels) of the training and test sets and a fixed prior.

Generalization Bounds · Representation Learning
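For intuition only, bounds in this family typically relate the generalization error to a relative-entropy (description-length) term normalized by the sample size $n$. The schematic below is a generic illustration of that shape, not the paper's exact bound; the symbols $P_{\text{repr}}$ (distribution of the learned representations) and $Q$ (fixed prior) are placeholders.

```latex
% Schematic shape of an information-theoretic generalization bound.
% Illustration only: the paper's actual result uses a "multi-letter"
% relative entropy between the representation/label distributions of the
% training and test sets and a fixed prior Q, and its exact form differs.
\mathrm{gen}(P_{\text{data}}, \mathcal{A})
  \;\lesssim\;
  \sqrt{\frac{D\big(P_{\text{repr}} \,\big\|\, Q\big)}{n}}
```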

Federated Learning You May Communicate Less Often!

no code implementations · 9 Jun 2023 · Milad Sefidgaran, Romain Chor, Abdellatif Zaidi, Yijun Wan

Moreover, when specialized to the case $R=1$ (sometimes referred to as "one-shot" FL or distributed learning), our bounds suggest that the generalization error of the FL setting decreases faster than that of centralized learning by a factor of $\mathcal{O}(\sqrt{\log(K)/K})$, thereby generalizing recent findings in this direction to arbitrary loss functions and algorithms.

Federated Learning
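As a quick sanity check on the $\mathcal{O}(\sqrt{\log(K)/K})$ factor quoted above, the short Python snippet below (illustrative only, not code from the paper) evaluates it for a few values of the number of clients $K$, showing how the suggested reduction in generalization error grows as $K$ increases.

```python
# Illustrative only: evaluate the sqrt(log(K)/K) factor from the abstract
# for several client counts K in the "one-shot" FL setting (R = 1).
import math

for K in (1, 2, 10, 50, 100, 1000):
    factor = math.sqrt(math.log(K) / K)  # 0 at K = 1, shrinking as K grows
    print(f"K = {K:>5d}  ->  sqrt(log(K)/K) = {factor:.4f}")
```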

More Communication Does Not Result in Smaller Generalization Error in Federated Learning

no code implementations · 24 Apr 2023 · Romain Chor, Milad Sefidgaran, Abdellatif Zaidi

We establish an upper bound on the generalization error that accounts explicitly for the effect of $R$ (in addition to the number of participating devices $K$ and dataset size $n$).

Federated Learning

Data-dependent Generalization Bounds via Variable-Size Compressibility

no code implementations · 9 Mar 2023 · Milad Sefidgaran, Abdellatif Zaidi

In this framework, the generalization error of an algorithm is linked to a variable-size 'compression rate' of its input data.

Generalization Bounds

Rate-Distortion Theoretic Bounds on Generalization Error for Distributed Learning

1 code implementation · 6 Jun 2022 · Milad Sefidgaran, Romain Chor, Abdellatif Zaidi

In this paper, we use tools from rate-distortion theory to establish new upper bounds on the generalization error of statistical distributed learning algorithms.

Federated Learning · Generalization Bounds

Rate-Distortion Theoretic Generalization Bounds for Stochastic Learning Algorithms

no code implementations · 4 Mar 2022 · Milad Sefidgaran, Amin Gohari, Gaël Richard, Umut Şimşekli

Understanding generalization in modern machine learning settings has been one of the major challenges in statistical learning theory.

Generalization Bounds · Learning Theory

Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks

1 code implementation · NeurIPS 2021 · Melih Barsbey, Milad Sefidgaran, Murat A. Erdogdu, Gaël Richard, Umut Şimşekli

Neural network compression techniques have become increasingly popular as they can drastically reduce the storage and computation requirements for very large networks.

Generalization Bounds · Neural Network Compression
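As context for the compression techniques mentioned above, here is a minimal, self-contained NumPy sketch of magnitude pruning, one common form of network compression. It is only a generic illustration; it is not the paper's method, which relates compressibility to heavy tails in the SGD iterates.

```python
# Minimal sketch of magnitude pruning (generic illustration, not the
# paper's method): keep only the largest-magnitude fraction of weights.
import numpy as np

def magnitude_prune(weights: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Zero out all but the top `keep_ratio` fraction of weights by magnitude."""
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_ratio * flat.size))
    # Threshold at the k-th largest magnitude.
    threshold = np.partition(flat, flat.size - k)[flat.size - k]
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
pruned = magnitude_prune(w, keep_ratio=0.25)
print(f"nonzero before: {np.count_nonzero(w)}, after: {np.count_nonzero(pruned)}")
```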
