1 code implementation • NeurIPS 2023 • Milad Sefidgaran, Abdellatif Zaidi, Piotr Krasnowski
Rather than the mutual information between the encoder's input and the representation, a quantity that the related literature often takes to reflect the algorithm's generalization capability but that in fact falls short of doing so, our new bounds involve the "multi-letter" relative entropy between the distribution of the representations (or labels) of the training and test sets and a fixed prior.
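For context, the quantity often invoked in prior work enters bounds of the schematic form below (an illustrative, Xu–Raginsky-style statement with a $\sigma$-sub-Gaussian loss and $n$ training samples, not this paper's result):

$$\mathrm{gen} \;\lesssim\; \sqrt{\frac{2\sigma^{2}}{n}\, I(X;Z)},$$

whereas the bounds announced here replace the input–representation mutual information $I(X;Z)$ with a multi-letter relative entropy between the representations' distribution and a fixed prior.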
1 code implementation • 13 Jun 2023 • Yijun Wan, Melih Barsbey, Abdellatif Zaidi, Umut Simsekli
Neural network compression has become an increasingly important subject, not only due to its practical relevance, but also due to its theoretical implications, as there is an explicit connection between compressibility and generalization error.
no code implementations • 9 Jun 2023 • Milad Sefidgaran, Romain Chor, Abdellatif Zaidi, Yijun Wan
Moreover, when specialized to the case $R=1$ (sometimes referred to as "one-shot" FL or distributed learning), our bounds suggest that the generalization error of the FL setting decreases faster than that of centralized learning by a factor of $\mathcal{O}(\sqrt{\log(K)/K})$, thereby generalizing recent findings in this direction to arbitrary loss functions and algorithms.
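To make the scaling concrete, the snippet below evaluates the factor $\sqrt{\log(K)/K}$ for a few client counts $K$ (illustrative arithmetic only):

```python
# Factor by which the bound suggests the FL generalization error shrinks
# relative to centralized learning, as the number of clients K grows.
import math

for K in [2, 10, 100, 1000]:
    print(f"K = {K:4d}   sqrt(log(K)/K) = {math.sqrt(math.log(K) / K):.4f}")
```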
no code implementations • 24 Apr 2023 • Romain Chor, Milad Sefidgaran, Abdellatif Zaidi
We establish an upper bound on the generalization error that accounts explicitly for the effect of $R$ (in addition to the number of participating devices $K$ and dataset size $n$).
no code implementations • 9 Mar 2023 • Milad Sefidgaran, Abdellatif Zaidi
In this framework, the generalization error of an algorithm is linked to a variable-size 'compression rate' of its input data.
1 code implementation • 6 Jun 2022 • Milad Sefidgaran, Romain Chor, Abdellatif Zaidi
In this paper, we use tools from rate-distortion theory to establish new upper bounds on the generalization error of statistical distributed learning algorithms.
no code implementations • 7 Jul 2021 • Matei Moldoveanu, Abdellatif Zaidi
It is widely believed that extending the success of modern machine learning techniques to mobile devices and wireless networks has the potential to enable important new services.
no code implementations • 25 May 2021 • Septimia Sarbu, Abdellatif Zaidi
We consider the problem of learning parametric distributions from their quantized samples in a network.
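A minimal single-node instance of this problem, as a sketch: maximum-likelihood estimation of the mean $\theta$ of a unit-variance Gaussian from one-bit quantized samples. Since $P(X > 0) = \Phi(\theta)$, the MLE inverts the empirical frequency of ones (the setup and names are illustrative assumptions, not the paper's network setting):

```python
# Estimate the mean of N(theta, 1) from one-bit quantized samples.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
theta = 0.7                        # true parameter, unknown to the estimator
x = rng.normal(theta, 1.0, size=10_000)
b = (x > 0).astype(float)          # one-bit quantizer: keep only the sign
theta_hat = norm.ppf(b.mean())     # invert P(X > 0) = Phi(theta)
print(f"theta_hat = {theta_hat:.3f}")  # close to 0.7
```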
no code implementations • 30 Apr 2021 • Matei Moldoveanu, Abdellatif Zaidi
In this paper, we consider a problem in which distributively extracted features are used for performing inference in wireless networks.
no code implementations • 15 Feb 2021 • Mohammad Mahdi Mahvari, Mari Kobayashi, Abdellatif Zaidi
In the context of statistical learning, the Information Bottleneck method seeks the right balance between accuracy and generalization capability through a suitable tradeoff between compression complexity, measured by minimum description length, and distortion, evaluated under the logarithmic loss measure.
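For reference, the Information Bottleneck formalizes this tradeoff through the standard Lagrangian, with representation $T$ and tradeoff parameter $\beta$:

$$\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y).$$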
no code implementations • 2 Nov 2020 • Mohammad Mahdi Mahvari, Mari Kobayashi, Abdellatif Zaidi
The Information Bottleneck method is a learning technique that seeks the right balance between accuracy and generalization capability through a suitable tradeoff between compression complexity, measured by minimum description length, and distortion, evaluated under the logarithmic loss measure.
no code implementations • 31 Jan 2020 • Abdellatif Zaidi, Inaki Estella Aguerri, Shlomo Shamai
This tutorial paper focuses on variants of the bottleneck problem from an information-theoretic perspective, and discusses practical methods to solve them as well as their connections to coding and learning.
no code implementations • 25 Sep 2019 • Abdellatif Zaidi, Inaki Estella Aguerri
The problem of distributed representation learning is one in which multiple sources of information $X_1, \ldots, X_K$ are processed separately so as to extract useful information about some statistically correlated ground truth $Y$.
no code implementations • 28 May 2019 • Yigit Ugur, George Arvanitakis, Abdellatif Zaidi
In this paper, we develop an unsupervised generative clustering framework that combines the Variational Information Bottleneck and the Gaussian Mixture Model.
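The sketch below shows the two ingredients such a framework combines: a VIB-style stochastic encoder and a learnable Gaussian-mixture prior over the latent space, with the rate term estimated by Monte Carlo. Layer sizes and names are illustrative assumptions, not the paper's implementation:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBGMM(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, n_clusters=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu_head = nn.Linear(256, z_dim)
        self.logvar_head = nn.Linear(256, z_dim)
        # Learnable GMM prior: mixture logits, means, diagonal log-variances.
        self.pi_logits = nn.Parameter(torch.zeros(n_clusters))
        self.c_mu = nn.Parameter(torch.randn(n_clusters, z_dim))
        self.c_logvar = nn.Parameter(torch.zeros(n_clusters, z_dim))

    def encode(self, x):
        h = self.net(x)
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return z, mu, logvar

    def rate(self, z, mu, logvar):
        """Monte-Carlo estimate of KL(q(z|x) || p(z)) under the GMM prior."""
        log_q = -0.5 * ((z - mu) ** 2 / logvar.exp()
                        + logvar + math.log(2 * math.pi)).sum(-1)   # (B,)
        log_pi = F.log_softmax(self.pi_logits, dim=0)               # (K,)
        diff = z.unsqueeze(1) - self.c_mu                           # (B,K,D)
        log_p_k = -0.5 * (diff ** 2 / self.c_logvar.exp()
                          + self.c_logvar + math.log(2 * math.pi)).sum(-1)
        log_p = torch.logsumexp(log_pi + log_p_k, dim=1)            # (B,)
        return (log_q - log_p).mean()

model = VIBGMM()
x = torch.randn(8, 784)
z, mu, logvar = model.encode(x)
print(model.rate(z, mu, logvar))
```

In a full training loop, this rate term would be weighted against a distortion term (e.g., a reconstruction or classification log-loss), and cluster assignments would be read off from the mixture responsibilities of each latent sample.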
no code implementations • 11 Jul 2018 • Inaki Estella Aguerri, Abdellatif Zaidi
The problem of distributed representation learning is one in which multiple sources of information $X_1,\ldots, X_K$ are processed separately so as to learn as much information as possible about some ground truth $Y$.
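A schematic Lagrangian for this distributed relevance–complexity tradeoff, in one common formulation from the distributed Information Bottleneck literature (not necessarily the exact objective optimized in this paper):

$$\max_{\{p(t_k \mid x_k)\}_{k=1}^{K}} \; I(Y; T_1, \ldots, T_K) \;-\; \beta \sum_{k=1}^{K} I(X_k; T_k).$$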