no code implementations • 19 Feb 2024 • Theresa Stadler, Bogdan Kulynych, Nicolas Papernot, Michael Gastpar, Carmela Troncoso
The promise of least-privilege learning -- to find feature representations that are useful for a learning task but prevent inference of any sensitive information unrelated to this task -- is highly appealing.
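A minimal sketch of how such representations are commonly sought in practice, via adversarial training: an encoder is fit to a task head while an adversary head tries to recover the sensitive attribute. The network shapes, the trade-off weight, and all names here are illustrative assumptions, not the paper's method.

```python
# Minimal sketch (assumed setup, not the paper's method): learn a representation
# z = f(x) that predicts the task label y while an adversary tries to recover a
# sensitive attribute s from z. Alternating minimax training with PyTorch.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))  # encoder f
task = nn.Linear(4, 2)                                             # task head
adv = nn.Linear(4, 2)                                              # adversary

opt_main = torch.optim.Adam(list(enc.parameters()) + list(task.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

x = torch.randn(64, 16)         # toy batch
y = torch.randint(0, 2, (64,))  # task labels
s = torch.randint(0, 2, (64,))  # sensitive attribute

for step in range(200):
    # 1) adversary tries to infer s from the (frozen) representation
    z = enc(x).detach()
    opt_adv.zero_grad()
    ce(adv(z), s).backward()
    opt_adv.step()
    # 2) encoder + task head: fit the task, *maximize* the adversary's loss
    opt_main.zero_grad()
    z = enc(x)
    loss = ce(task(z), y) - ce(adv(z), s)  # trade-off weight 1 (assumed)
    loss.backward()
    opt_main.step()
```

Whether such a representation can truly prevent inference of all task-unrelated attributes is precisely the question the paper examines.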
no code implementations • 6 Feb 2024 • Marco Bondaschi, Michael Gastpar
Large language models (LLMs) have recently gained much popularity due to their surprising ability to generate human-like English sentences.
1 code implementation • 6 Feb 2024 • Ashok Vardhan Makkuva, Marco Bondaschi, Adway Girish, Alliot Nagle, Martin Jaggi, Hyeji Kim, Michael Gastpar
Inspired by the Markovianity of natural languages, we model the data as a Markovian source and utilize this framework to systematically study the interplay between the data-distributional properties, the transformer architecture, the learnt distribution, and the final model performance.
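As a toy illustration of the modelling assumption, one can sample training sequences from a first-order binary Markov source; the transition probabilities below are illustrative, not from the paper.

```python
# Minimal sketch: sample sequences from a first-order binary Markov source,
# the kind of synthetic data used to probe what a transformer learns.
import numpy as np

rng = np.random.default_rng(0)
p, q = 0.2, 0.3                          # P(1|0) = p, P(0|1) = q (assumed values)
P = np.array([[1 - p, p], [q, 1 - q]])   # row-stochastic transition matrix

def sample(n):
    x = np.empty(n, dtype=int)
    x[0] = rng.integers(2)
    for t in range(1, n):
        x[t] = rng.random() < P[x[t - 1], 1]
    return x

seq = sample(10_000)
# Empirical check: the estimated P(1|0) should approach p.
est = np.mean(seq[1:][seq[:-1] == 0])
print(f"estimated P(1|0) = {est:.3f} (true {p})")
```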
no code implementations • 24 Sep 2023 • Michael Gastpar, Ido Nachum, Jonathan Shafer, Thomas Weinberger
We study the notion of a generalization bound being uniformly tight, meaning that the difference between the bound and the population loss is small for all learning algorithms and all population distributions.
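One way to make the quoted notion precise (notation assumed for illustration): a valid bound $B$ on the population loss, $L_{\mathcal{D}}(A) \le B(A,\mathcal{D})$, is uniformly $\varepsilon$-tight if its slack is small for every algorithm and every distribution,

\[
\sup_{A}\,\sup_{\mathcal{D}}\;\Big( B(A,\mathcal{D}) - L_{\mathcal{D}}(A) \Big) \;\le\; \varepsilon .
\]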
no code implementations • 22 Mar 2023 • Amedeo Roberto Esposito, Adrien Vandenbroucque, Michael Gastpar
We are thus able to provide estimator-independent impossibility results thanks to the data-processing inequalities that divergences satisfy.
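The argument pattern, sketched under assumed notation: for any divergence $D_f$ satisfying the data-processing inequality and any estimator $\hat\theta = g(X)$, randomized or not,

\[
D_f\big(P_{\hat\theta}\,\|\,Q_{\hat\theta}\big) \;\le\; D_f\big(P_X\,\|\,Q_X\big),
\]

so whenever estimating well would force the left-hand side to be large, a small right-hand side rules this out for every estimator at once.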
no code implementations • 28 Feb 2023 • Ibrahim Issa, Amedeo Roberto Esposito, Michael Gastpar
We adopt an information-theoretic framework to analyze the generalization behavior of the class of iterative, noisy learning algorithms.
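A representative member of this class (an assumed instance, not necessarily the paper's setting) is gradient descent with injected Gaussian noise, in the SGLD style; the injected noise is what makes information-theoretic generalization bounds applicable.

```python
# Minimal sketch (assumed instance): noisy iterative learning on a toy
# least-squares problem. Each update adds Gaussian noise to the gradient step.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=100)

w = np.zeros(5)
eta, sigma = 0.01, 0.05                      # step size and noise level (illustrative)
for t in range(1000):
    grad = 2 * X.T @ (X @ w - y) / len(y)    # gradient of the squared loss
    w = w - eta * grad + sigma * np.sqrt(eta) * rng.normal(size=5)

print(np.linalg.norm(w - w_true))            # close to 0 despite the noise
```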
no code implementations • 27 Jun 2022 • Aditya Pradeep, Ido Nachum, Michael Gastpar
We prove that every online learnable class of functions of Littlestone dimension $d$ admits a learning algorithm with finite information complexity.
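Here "information complexity" can be read as the mutual information between the input sample $S$ and the algorithm's output $A(S)$ (notation assumed for illustration), so the result asserts

\[
\mathrm{Ldim}(\mathcal{H}) = d < \infty
\;\Longrightarrow\;
\exists\, A \ \text{learning}\ \mathcal{H}\ \text{with}\ I\big(S;\,A(S)\big) < \infty .
\]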
no code implementations • 8 Feb 2022 • Amedeo Roberto Esposito, Michael Gastpar
In this work, we connect the problem of bounding the expected generalisation error with transportation-cost inequalities.
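The standard chain that such inequalities enable, sketched under assumed notation: if the loss $f$ is $L$-Lipschitz and the measure $P$ satisfies a $T_1$ transportation-cost inequality $W_1(Q,P) \le \sqrt{2\sigma^2\, D(Q\,\|\,P)}$ for all $Q$, then by Kantorovich-Rubinstein duality

\[
\big|\, \mathbb{E}_{Q}[f] - \mathbb{E}_{P}[f] \,\big|
\;\le\; L\, W_1(Q,P)
\;\le\; L \sqrt{2\sigma^2\, D(Q\,\|\,P)} ,
\]

and instantiating $Q$ as the distribution of the learned hypothesis and $P$ as a reference measure turns this into a bound on the expected generalisation error.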
no code implementations • 3 Nov 2021 • Ido Nachum, Jan Hązła, Michael Gastpar, Anatoly Khina
The celebrated Johnson-Lindenstrauss lemma answers this question for linear fully-connected neural networks (FNNs), stating that the geometry is essentially preserved.
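A quick numerical illustration of the lemma (a sketch, not the paper's experiment): a random Gaussian linear layer approximately preserves pairwise distances, which is the sense in which "the geometry is essentially preserved". Dimensions are illustrative.

```python
# Minimal sketch: the Johnson-Lindenstrauss effect for a random linear layer.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 1000, 200                   # points, input dim, projected dim
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, k)) / np.sqrt(k)  # random linear layer, scaled
Y = X @ W

orig = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
proj = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
ratio = proj[orig > 0] / orig[orig > 0]
print(ratio.min(), ratio.max())           # distance ratios concentrate near 1
```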
no code implementations • 30 Jul 2021 • Sennur Ulukus, Salman Avestimehr, Michael Gastpar, Syed Jafar, Ravi Tandon, Chao Tian
Most of our lives are conducted in cyberspace.
no code implementations • 14 Oct 2020 • Fernando E. Rosas, Pedro A. M. Mediano, Michael Gastpar
Learning and compression are driven by the common aim of identifying and exploiting statistical regularities in data, which opens the door for fertile collaboration between these areas.
no code implementations • 3 Feb 2020 • Michael Gastpar, Erixhen Sula
We give an information-theoretic interpretation of Canonical Correlation Analysis (CCA) via (relaxed) Wyner's common information.
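For reference, the quantity being reinterpreted: classical CCA computes the singular values of the whitened cross-covariance. This is the standard algorithm; the toy data below is assumed for illustration only.

```python
# Minimal sketch: classical CCA. Canonical correlations are the singular
# values of Cxx^{-1/2} Cxy Cyy^{-1/2}.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=n)                            # shared latent variable
X = np.c_[z + 0.5 * rng.normal(size=n), rng.normal(size=n)]
Y = np.c_[z + 0.5 * rng.normal(size=n), rng.normal(size=n)]

X -= X.mean(0)
Y -= Y.mean(0)
Cxx, Cyy = X.T @ X / n, Y.T @ Y / n
Cxy = X.T @ Y / n

def inv_sqrt(C):
    vals, vecs = np.linalg.eigh(C)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

rho = np.linalg.svd(inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy), compute_uv=False)
print(rho)   # the leading canonical correlation reflects the shared latent z
```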
no code implementations • 14 Jan 2020 • Amedeo Roberto Esposito, Michael Gastpar, Ibrahim Issa
The aim of this work is to provide bounds connecting two probability measures of the same event using Rényi $\alpha$-Divergences and Sibson's $\alpha$-Mutual Information, generalizations of the Kullback-Leibler Divergence and Shannon's Mutual Information, respectively.
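For reference, the two order-$\alpha$ quantities involved (standard definitions; natural logarithms assumed):

\[
D_\alpha(P\,\|\,Q) \;=\; \frac{1}{\alpha-1}\,\log\, \mathbb{E}_Q\!\left[\Big(\tfrac{dP}{dQ}\Big)^{\!\alpha}\right],
\qquad
I_\alpha(X;Y) \;=\; \min_{Q_Y}\, D_\alpha\big(P_{XY}\,\|\,P_X \times Q_Y\big),
\]

which recover the Kullback-Leibler Divergence and Shannon's Mutual Information, respectively, as $\alpha \to 1$.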
no code implementations • 1 Dec 2019 • Amedeo Roberto Esposito, Michael Gastpar, Ibrahim Issa
In this work, the probability of an event under a joint distribution is bounded in terms of the probability of the same event under the product of the marginals (which is typically easier to analyze), together with a measure of the dependence between the two random variables.
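One representative instance, with Shannon's mutual information as the dependence measure (a sketch via the data-processing inequality for the binary KL divergence $d(\cdot\,\|\,\cdot)$; natural logarithms assumed): for any event $E$,

\[
d\big(P_{XY}(E)\,\big\|\,(P_X \otimes P_Y)(E)\big) \;\le\; I(X;Y)
\quad\Longrightarrow\quad
P_{XY}(E) \;\le\; \frac{I(X;Y) + \log 2}{\log\!\big( 1 / (P_X \otimes P_Y)(E) \big)} ,
\]

where the second step uses $d(p\,\|\,q) \ge p \log(1/q) - \log 2$.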
no code implementations • 5 Mar 2019 • Amedeo Roberto Esposito, Michael Gastpar, Ibrahim Issa
Our contribution consists in the introduction of a new approach, based on the concept of Maximal Leakage, an information-theoretic measure of information leakage.
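For finite alphabets, Maximal Leakage has the closed form (standard definition)

\[
\mathcal{L}(X \to Y) \;=\; \log \sum_{y \in \mathcal{Y}}\, \max_{x:\, P_X(x) > 0} P_{Y|X}(y \mid x),
\]

which coincides with Sibson's mutual information of order $\alpha = \infty$.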
no code implementations • 29 Nov 2018 • Adriano Pastore, Michael Gastpar
We derive the respective normalized first-order terms of convergence (as $n\to\infty$), which, for a given target privacy $\epsilon$, represent a rule-of-thumb factor by which the sample size must be augmented to achieve the same estimation accuracy as a non-randomizing channel.
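As a concrete example of a randomizing channel and the accuracy it costs (an illustrative sketch, not the paper's derivation): binary randomized response with privacy parameter $\epsilon$, followed by the standard channel-inverting estimator.

```python
# Minimal sketch: eps-locally-private randomized response for a Bernoulli mean.
import numpy as np

rng = np.random.default_rng(0)
eps = 1.0
p_keep = np.exp(eps) / (1 + np.exp(eps))    # P(report the true bit)

theta, n = 0.3, 100_000
x = rng.random(n) < theta                   # private bits, Bernoulli(theta)
flip = rng.random(n) > p_keep
y = np.where(flip, ~x, x)                   # eps-LDP reports

# Unbiased estimator: invert the known channel.
theta_hat = (y.mean() - (1 - p_keep)) / (2 * p_keep - 1)
print(theta_hat)   # close to theta, but with inflated variance vs. direct sampling
```

The variance inflation of this inverted estimator is exactly the kind of sample-size penalty that the first-order terms quantify.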