Search Results for author: Abdellatif Zaidi

Found 15 papers, 3 papers with code

Minimum Description Length and Generalization Guarantees for Representation Learning

1 code implementation • NeurIPS 2023 • Milad Sefidgaran, Abdellatif Zaidi, Piotr Krasnowski

Rather than the mutual information between the encoder's input and the representation, which is often believed in the related literature to reflect the algorithm's generalization capability but in fact falls short of doing so, our new bounds involve the "multi-letter" relative entropy between the distribution of the representations (or labels) of the training and test sets and a fixed prior.

Generalization Bounds • Representation Learning

Implicit Compressibility of Overparametrized Neural Networks Trained with Heavy-Tailed SGD

1 code implementation • 13 Jun 2023 • Yijun Wan, Melih Barsbey, Abdellatif Zaidi, Umut Simsekli

Neural network compression has become an increasingly important subject, not only due to its practical relevance but also due to its theoretical implications, as there is an explicit connection between compressibility and generalization error.

Neural Network Compression

Federated Learning You May Communicate Less Often!

no code implementations • 9 Jun 2023 • Milad Sefidgaran, Romain Chor, Abdellatif Zaidi, Yijun Wan

Moreover, specialized to the case $R=1$ (sometimes referred to as "one-shot" FL or distributed learning) our bounds suggest that the generalization error of the FL setting decreases faster than that of centralized learning by a factor of $\mathcal{O}(\sqrt{\log(K)/K})$, thereby generalizing recent findings in this direction to arbitrary loss functions and algorithms.
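To get a rough sense of how quickly this factor shrinks with the number of participating devices $K$ (a simple numerical reading of the stated rate only, assuming the natural logarithm; it says nothing about the constants in the paper's bound):

    import math

    # Evaluate the sqrt(log(K)/K) factor quoted above for a few values of K.
    for K in (5, 10, 50, 100, 500):
        factor = math.sqrt(math.log(K) / K)
        print(f"K = {K:4d}  ->  sqrt(log(K)/K) ≈ {factor:.3f}")

For instance, going from $K=10$ to $K=100$ devices shrinks the factor from roughly 0.48 to roughly 0.21.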

Federated Learning

More Communication Does Not Result in Smaller Generalization Error in Federated Learning

no code implementations • 24 Apr 2023 • Romain Chor, Milad Sefidgaran, Abdellatif Zaidi

We establish an upper bound on the generalization error that accounts explicitly for the effect of $R$ (in addition to the number of participating devices $K$ and dataset size $n$).

Federated Learning

Data-dependent Generalization Bounds via Variable-Size Compressibility

no code implementations • 9 Mar 2023 • Milad Sefidgaran, Abdellatif Zaidi

In this framework, the generalization error of an algorithm is linked to a variable-size 'compression rate' of its input data.

Generalization Bounds

Rate-Distortion Theoretic Bounds on Generalization Error for Distributed Learning

1 code implementation • 6 Jun 2022 • Milad Sefidgaran, Romain Chor, Abdellatif Zaidi

In this paper, we use tools from rate-distortion theory to establish new upper bounds on the generalization error of statistical distributed learning algorithms.

Federated Learning • Generalization Bounds

In-Network Learning: Distributed Training and Inference in Networks

no code implementations • 7 Jul 2021 • Matei Moldoveanu, Abdellatif Zaidi

It is widely perceived that extending the success of modern machine learning techniques to mobile devices and wireless networks has the potential to enable important new services.

On learning parametric distributions from quantized samples

no code implementations • 25 May 2021 • Septimia Sarbu, Abdellatif Zaidi

We consider the problem of learning parametric distributions from their quantized samples in a network.

On In-Network Learning. A Comparative Study with Federated and Split Learning

no code implementations • 30 Apr 2021 • Matei Moldoveanu, Abdellatif Zaidi

In this paper, we consider a problem in which distributively extracted features are used for performing inference in wireless networks.

Scalable Vector Gaussian Information Bottleneck

no code implementations • 15 Feb 2021 • Mohammad Mahdi Mahvari, Mari Kobayashi, Abdellatif Zaidi

In the context of statistical learning, the Information Bottleneck method seeks the right balance between accuracy and generalization capability through a suitable tradeoff between compression complexity, measured by the minimum description length, and distortion, evaluated under a logarithmic loss measure.
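For reference, the classical single-letter Information Bottleneck formulation of this relevance-complexity tradeoff (due to Tishby et al.; shown here only as background, since the paper studies a scalable vector Gaussian version of it) optimizes a stochastic mapping $p(u|x)$ of the observation $X$ into a representation $U$ via

$\min_{p(u|x)} \; I(X;U) - \beta\, I(U;Y),$

where $Y$ is the target variable, $I(X;U)$ measures complexity, $I(U;Y)$ measures relevance, and $\beta \ge 0$ sets the operating point along the tradeoff.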

Variational Inference

On the Relevance-Complexity Region of Scalable Information Bottleneck

no code implementations • 2 Nov 2020 • Mohammad Mahdi Mahvari, Mari Kobayashi, Abdellatif Zaidi

The Information Bottleneck method is a learning technique that seeks the right balance between accuracy and generalization capability through a suitable tradeoff between compression complexity, measured by the minimum description length, and distortion, evaluated under a logarithmic loss measure.

On the Information Bottleneck Problems: Models, Connections, Applications and Information Theoretic Views

no code implementations • 31 Jan 2020 • Abdellatif Zaidi, Inaki Estella Aguerri, Shlomo Shamai

This tutorial paper focuses on variants of the bottleneck problem from an information-theoretic perspective, discusses practical methods to solve them, and covers their connections to coding and learning.

Representation Learning • Variational Inference

An Information Theoretic Approach to Distributed Representation Learning

no code implementations • 25 Sep 2019 • Abdellatif Zaidi, Inaki Estella Aguerri

The problem of distributed representation learning is one in which multiple sources of information $X_1, \ldots, X_K$ are processed separately so as to extract useful information about some statistically correlated ground truth $Y$.

Representation Learning • Variational Inference

Variational Information Bottleneck for Unsupervised Clustering: Deep Gaussian Mixture Embedding

no code implementations • 28 May 2019 • Yigit Ugur, George Arvanitakis, Abdellatif Zaidi

In this paper, we develop an unsupervised generative clustering framework that combines the Variational Information Bottleneck and the Gaussian Mixture Model.
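As a sketch of how these two ingredients can be combined (a generic formulation in hypothetical notation, not necessarily the paper's exact objective): an encoder $q_\phi(z|x)$ maps a data point to a latent code, a decoder $p_\theta(x|z)$ reconstructs it, and the variational prior on the latent space is a $C$-component Gaussian mixture, so that one maximizes a VIB/ELBO-type objective

$\mathcal{L} = \mathbb{E}_{q_\phi(z|x)}\big[\log p_\theta(x|z)\big] - \beta\, \mathrm{KL}\big(q_\phi(z|x) \,\|\, \textstyle\sum_{c=1}^{C} \pi_c\, \mathcal{N}(z; \mu_c, \Sigma_c)\big),$

with cluster assignments read off from the mixture responsibilities of the learned codes.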

Clustering • Variational Inference

Distributed Variational Representation Learning

no code implementations • 11 Jul 2018 • Inaki Estella Aguerri, Abdellatif Zaidi

The problem of distributed representation learning is one in which multiple sources of information $X_1,\ldots, X_K$ are processed separately so as to learn as much information as possible about some ground truth $Y$.
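One natural way to formalize this (a sketch under the usual relevance-complexity reading of the Information Bottleneck, not necessarily the exact criterion optimized in the paper): each source $X_k$ is encoded separately into a local representation $U_k$, and the joint relevance about $Y$ is traded off against the per-source complexities,

$\max_{\{p(u_k|x_k)\}_{k=1}^{K}} \; I(Y; U_1, \ldots, U_K) - s \sum_{k=1}^{K} I(X_k; U_k), \qquad s \ge 0,$

where larger $s$ enforces more compression at each source.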

Representation Learning • Variational Inference
