Search Results for author: Michael Gastpar

Found 17 papers, 1 paper with code

The Fundamental Limits of Least-Privilege Learning

no code implementations 19 Feb 2024 Theresa Stadler, Bogdan Kulynych, Nicolas Papernot, Michael Gastpar, Carmela Troncoso

The promise of least-privilege learning -- to find feature representations that are useful for a learning task but prevent inference of any sensitive information unrelated to this task -- is highly appealing.

Attribute

Batch Universal Prediction

no code implementations 6 Feb 2024 Marco Bondaschi, Michael Gastpar

Large language models (LLMs) have recently gained much popularity due to their surprising ability to generate human-like English sentences.

Attention with Markov: A Framework for Principled Analysis of Transformers via Markov Chains

1 code implementation 6 Feb 2024 Ashok Vardhan Makkuva, Marco Bondaschi, Adway Girish, Alliot Nagle, Martin Jaggi, Hyeji Kim, Michael Gastpar

Inspired by the Markovianity of natural languages, we model the data as a Markovian source and utilize this framework to systematically study the interplay between the data-distributional properties, the transformer architecture, the learnt distribution, and the final model performance.
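As a rough illustration of the kind of Markovian data source this framework works with, here is a minimal sketch of sampling from a binary first-order Markov chain; the transition probabilities and sequence length are arbitrary placeholder values, not parameters from the paper.

```python
import numpy as np

def sample_binary_markov(length, p=0.2, q=0.3, rng=None):
    """Sample a binary first-order Markov chain.

    p = P(next=1 | current=0), q = P(next=0 | current=1); both are
    illustrative choices, not values taken from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    seq = np.empty(length, dtype=np.int64)
    seq[0] = rng.integers(0, 2)  # uniform initial state
    for t in range(1, length):
        if seq[t - 1] == 0:
            seq[t] = rng.random() < p
        else:
            seq[t] = rng.random() >= q
    return seq

# A batch of such sequences could then serve as training data for a small
# transformer whose learnt next-token distribution is compared against the
# true transition kernel.
batch = np.stack([sample_binary_markov(256) for _ in range(32)])
```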

Fantastic Generalization Measures are Nowhere to be Found

no code implementations 24 Sep 2023 Michael Gastpar, Ido Nachum, Jonathan Shafer, Thomas Weinberger

We study the notion of a generalization bound being uniformly tight, meaning that the difference between the bound and the population loss is small for all learning algorithms and all population distributions.

Generalization Bounds

Lower Bounds on the Bayesian Risk via Information Measures

no code implementations 22 Mar 2023 Amedeo Roberto Esposito, Adrien Vandenbroucque, Michael Gastpar

We are thus able to provide estimator-independent impossibility results thanks to the Data-Processing Inequalities that divergences satisfy.

Generalization Error Bounds for Noisy, Iterative Algorithms via Maximal Leakage

no code implementations 28 Feb 2023 Ibrahim Issa, Amedeo Roberto Esposito, Michael Gastpar

We adopt an information-theoretic framework to analyze the generalization behavior of the class of iterative, noisy learning algorithms.

Generalization Bounds
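For orientation, the class of algorithms analysed there includes noisy gradient methods of the kind sketched below; the quadratic toy objective, step size, and noise scale are illustrative assumptions, not settings from the paper.

```python
import numpy as np

def noisy_gradient_descent(grad_fn, w0, steps=100, lr=0.01, sigma=0.1, rng=None):
    """Generic noisy, iterative learner: w_{t+1} = w_t - lr * grad + Gaussian noise.

    The injected noise is what keeps information-theoretic quantities relating
    the training data to the algorithm's output finite and bounded.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.array(w0, dtype=float)
    for _ in range(steps):
        w = w - lr * grad_fn(w) + sigma * rng.standard_normal(w.shape)
    return w

# Toy usage: minimise a quadratic ||w - c||^2 for some data-dependent centre c.
c = np.array([1.0, -2.0])
w_hat = noisy_gradient_descent(lambda w: 2 * (w - c), w0=np.zeros(2))
```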

Finite Littlestone Dimension Implies Finite Information Complexity

no code implementations 27 Jun 2022 Aditya Pradeep, Ido Nachum, Michael Gastpar

We prove that every online learnable class of functions of Littlestone dimension $d$ admits a learning algorithm with finite information complexity.

From Generalisation Error to Transportation-cost Inequalities and Back

no code implementations 8 Feb 2022 Amedeo Roberto Esposito, Michael Gastpar

In this work, we connect the problem of bounding the expected generalisation error with transportation-cost inequalities.

A Johnson--Lindenstrauss Framework for Randomly Initialized CNNs

no code implementations 3 Nov 2021 Ido Nachum, Jan Hązła, Michael Gastpar, Anatoly Khina

The celebrated Johnson--Lindenstrauss lemma answers this question for linear fully-connected neural networks (FNNs), stating that the geometry is essentially preserved.

LEMMA
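As a quick numerical reminder of the classical Johnson--Lindenstrauss phenomenon invoked in the abstract, the sketch below checks that a random Gaussian projection roughly preserves pairwise distances; the dimensions are arbitrary, and this is the plain linear setting, not the CNN framework developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 1024, 256                        # number of points, ambient and projected dims
X = rng.standard_normal((n, d))
W = rng.standard_normal((d, k)) / np.sqrt(k)   # random linear "layer"
Y = X @ W

def pairwise_dists(Z):
    diff = Z[:, None, :] - Z[None, :, :]
    return np.linalg.norm(diff, axis=-1)

idx = np.triu_indices(n, 1)                    # distinct pairs only
ratio = pairwise_dists(Y)[idx] / pairwise_dists(X)[idx]
print(ratio.min(), ratio.max())                # concentrates around 1: geometry is roughly preserved
```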

Learning, compression, and leakage: Minimising classification error via meta-universal compression principles

no code implementations 14 Oct 2020 Fernando E. Rosas, Pedro A. M. Mediano, Michael Gastpar

Learning and compression are driven by the common aim of identifying and exploiting statistical regularities in data, which opens the door for fertile collaboration between these areas.

General Classification, PAC learning

Common Information Components Analysis

no code implementations 3 Feb 2020 Michael Gastpar, Erixhen Sula

We give an information-theoretic interpretation of Canonical Correlation Analysis (CCA) via (relaxed) Wyner's common information.
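For reference, the classical CCA that receives an information-theoretic interpretation here can be computed from the singular values of the whitened cross-covariance matrix; the toy data below is a made-up example, not anything from the paper.

```python
import numpy as np

def canonical_correlations(X, Y, eps=1e-10):
    """Classical CCA: singular values of Sxx^{-1/2} Sxy Syy^{-1/2}."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / n + eps * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / n + eps * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n

    def inv_sqrt(S):
        vals, vecs = np.linalg.eigh(S)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(M, compute_uv=False)

rng = np.random.default_rng(1)
Z = rng.standard_normal((500, 2))                                    # shared latent component
X = Z @ rng.standard_normal((2, 4)) + 0.5 * rng.standard_normal((500, 4))
Y = Z @ rng.standard_normal((2, 3)) + 0.5 * rng.standard_normal((500, 3))
print(canonical_correlations(X, Y))   # two large leading correlations, reflecting the shared latent Z
```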

Robust Generalization via $\alpha$-Mutual Information

no code implementations 14 Jan 2020 Amedeo Roberto Esposito, Michael Gastpar, Ibrahim Issa

The aim of this work is to provide bounds connecting two probability measures of the same event using Rényi $\alpha$-Divergences and Sibson's $\alpha$-Mutual Information, generalizations of the Kullback-Leibler Divergence and Shannon's Mutual Information, respectively.
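As a concrete anchor, Sibson's $\alpha$-Mutual Information admits a simple closed form for discrete distributions; the sketch below follows the standard definition, with a made-up joint distribution as input (illustrative only).

```python
import numpy as np

def sibson_alpha_mi(p_xy, alpha):
    """Sibson's mutual information of order alpha for a discrete joint p(x, y):
    I_alpha(X;Y) = alpha/(alpha-1) * log sum_y ( sum_x p(x) p(y|x)^alpha )^(1/alpha).
    Assumes alpha > 0 and alpha != 1; the alpha -> 1 limit recovers Shannon's MI.
    """
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y_given_x = np.divide(p_xy, p_x, out=np.zeros_like(p_xy), where=p_x > 0)
    inner = (p_x * p_y_given_x ** alpha).sum(axis=0) ** (1.0 / alpha)
    return alpha / (alpha - 1.0) * np.log(inner.sum())

p_xy = np.array([[0.3, 0.1],
                 [0.1, 0.5]])   # illustrative joint distribution (rows: x, columns: y)
print(sibson_alpha_mi(p_xy, alpha=2.0))   # value in nats
```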

Generalization Error Bounds Via Rényi-, $f$-Divergences and Maximal Leakage

no code implementations 1 Dec 2019 Amedeo Roberto Esposito, Michael Gastpar, Ibrahim Issa

In this work, the probability of an event under some joint distribution is bounded by measuring it under the product of the marginals instead (which is typically easier to analyze), together with a measure of the dependence between the two random variables.

Learning Theory

A New Approach to Adaptive Data Analysis and Learning via Maximal Leakage

no code implementations 5 Mar 2019 Amedeo Roberto Esposito, Michael Gastpar, Ibrahim Issa

Our contribution is the introduction of a new approach, based on the concept of Maximal Leakage, an information-theoretic measure of information leakage.
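For finite alphabets, Maximal Leakage has a simple closed form, $\log \sum_y \max_x P_{Y|X}(y \mid x)$; the sketch below evaluates it for a binary symmetric channel, where the channel choice is purely an assumption for demonstration, not an example taken from the paper.

```python
import numpy as np

def maximal_leakage(p_y_given_x):
    """Maximal leakage log sum_y max_x P(y|x), in nats.

    p_y_given_x: channel matrix with rows indexed by x, columns by y; each row
    sums to 1. Assumes every input symbol x has positive probability under P_X.
    """
    return np.log(p_y_given_x.max(axis=0).sum())

# Illustrative binary symmetric channel with crossover probability 0.1.
bsc = np.array([[0.9, 0.1],
                [0.1, 0.9]])
print(maximal_leakage(bsc))   # log(1.8) ≈ 0.588 nats
```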

Locally Differentially-Private Randomized Response for Discrete Distribution Learning

no code implementations 29 Nov 2018 Adriano Pastore, Michael Gastpar

We derive the respective normalized first-order terms of convergence (as $n\to\infty$), which, for a given target privacy $\epsilon$, represent a rule-of-thumb factor by which the sample size must be increased to achieve the same estimation accuracy as a non-randomizing channel.
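A minimal sketch of the standard $k$-ary $\epsilon$-locally differentially private randomized-response channel studied in this line of work; the alphabet size, privacy level, and underlying distribution below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def randomized_response(x, k, epsilon, rng=None):
    """k-ary randomized response: keep the true symbol with probability
    e^eps / (e^eps + k - 1), otherwise report one of the other k - 1 symbols
    uniformly at random. This channel satisfies epsilon-local DP.
    """
    rng = np.random.default_rng() if rng is None else rng
    keep_prob = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < keep_prob:
        return x
    others = [s for s in range(k) if s != x]
    return rng.choice(others)

# Estimating a discrete distribution from randomized reports requires a larger
# sample size than direct observation; the paper quantifies this blow-up factor.
rng = np.random.default_rng(0)
reports = [randomized_response(x, k=4, epsilon=1.0, rng=rng)
           for x in rng.choice(4, size=10000, p=[0.4, 0.3, 0.2, 0.1])]
```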
