Search Results for author: Giuseppe Durisi

Found 14 papers, 0 papers with code

Secure Aggregation is Not Private Against Membership Inference Attacks

no code implementations26 Mar 2024 Khac-Hoang Ngo, Johan Östman, Giuseppe Durisi, Alexandre Graell i Amat

In this paper, we delve into the privacy implications of SecAgg by treating it as a local differential privacy (LDP) mechanism for each local update.

Federated Learning Privacy Preserving
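
As context for the paper's claim, here is a minimal sketch of why secure aggregation (SecAgg) reveals the exact sum of the client updates, the quantity whose privacy the paper analyzes. This is a toy pairwise-masking scheme in NumPy; all names are illustrative, not the paper's or any SecAgg library's code:

```python
import numpy as np

def pairwise_masks(n_clients, dim, seed=0):
    """Generate antisymmetric pairwise masks: m[i][j] = -m[j][i]."""
    rng = np.random.default_rng(seed)
    m = np.zeros((n_clients, n_clients, dim))
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            r = rng.standard_normal(dim)
            m[i, j], m[j, i] = r, -r
    return m

def secure_aggregate(updates):
    """Each client uploads its update plus masks that cancel in the sum."""
    n, dim = updates.shape
    m = pairwise_masks(n, dim)
    masked = updates + m.sum(axis=1)  # what the server receives per client
    return masked.sum(axis=0)         # masks cancel: equals updates.sum(axis=0)

updates = np.random.default_rng(1).standard_normal((5, 3))
agg = secure_aggregate(updates)
assert np.allclose(agg, updates.sum(axis=0))
```

Individually masked uploads look like noise, but the server learns the exact aggregate; the paper studies the LDP guarantees of precisely this sum.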

A TDD Distributed MIMO Testbed Using a 1-Bit Radio-Over-Fiber Fronthaul Architecture

no code implementations26 Mar 2024 Lise Aabel, Sven Jacobsson, Mikael Coldrey, Frida Olofsson, Giuseppe Durisi, Christian Fager

The CU is connected to multiple single-antenna remote radio heads (RRHs) via optical fibers, over which a binary RF waveform is transmitted.
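
As a hedged illustration of how a binary waveform can carry an analog RF signal over a fiber, here is a textbook first-order sigma-delta modulator in Python. This is a schematic sketch only, not the testbed's actual fronthaul implementation:

```python
import numpy as np

def first_order_sigma_delta(x):
    """1-bit quantize an oversampled waveform; quantization noise is shaped
    toward high frequencies, away from the signal band."""
    y = np.empty_like(x)
    acc = 0.0
    for n, sample in enumerate(x):
        acc += sample - (y[n - 1] if n else 0.0)  # integrate the error
        y[n] = 1.0 if acc >= 0 else -1.0          # 1-bit quantizer
    return y

osr = 64                                      # oversampling ratio
t = np.arange(4096)
x = 0.5 * np.sin(2 * np.pi * t / (2 * osr))   # slow tone, heavily oversampled
y = first_order_sigma_delta(x)                # binary waveform for the fiber
# Low-pass filtering y (e.g., a moving average) recovers x up to shaped noise.
```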

Generalization Bounds: Perspectives from Information Theory and PAC-Bayes

no code implementations8 Sep 2023 Fredrik Hellström, Giuseppe Durisi, Benjamin Guedj, Maxim Raginsky

Over the past decades, the PAC-Bayesian approach has been established as a flexible framework to study the generalization capabilities of machine learning algorithms and to design new ones.

Generalization Bounds
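
For context, one classical form of a PAC-Bayesian bound (a McAllester-style statement in our own notation, not a result specific to this survey):

```latex
% With probability at least 1-\delta over the draw of an n-sample training
% set S, simultaneously for all posteriors Q over hypotheses, given a
% data-free prior P,
L_{\mathcal{D}}(Q) \le \widehat{L}_S(Q)
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln(2\sqrt{n}/\delta)}{2n}},
% where L_{\mathcal{D}}(Q) and \widehat{L}_S(Q) denote the Q-average
% population and empirical risks, respectively.
```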

A New Family of Generalization Bounds Using Samplewise Evaluated CMI

no code implementations12 Oct 2022 Fredrik Hellström, Giuseppe Durisi

Furthermore, using the evaluated CMI, we derive a samplewise, average version of Seeger's PAC-Bayesian bound, where the convex function is the binary KL divergence.

Generalization Bounds
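
For reference, the binary KL divergence and Seeger's classical PAC-Bayesian bound, which the paper's samplewise evaluated-CMI result parallels (these are the standard statements, not the paper's new bound):

```latex
% Binary KL divergence between Bernoulli parameters p and q:
\mathrm{kl}(p \,\|\, q) = p \ln\frac{p}{q} + (1-p) \ln\frac{1-p}{1-q}.
% Seeger's bound: with probability at least 1-\delta over the n-sample
% training set S, simultaneously for all posteriors Q,
\mathrm{kl}\!\left(\widehat{L}_S(Q) \,\big\|\, L_{\mathcal{D}}(Q)\right)
  \le \frac{\mathrm{KL}(Q \,\|\, P) + \ln(2\sqrt{n}/\delta)}{n}.
```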

Evaluated CMI Bounds for Meta Learning: Tightness and Expressiveness

no code implementations12 Oct 2022 Fredrik Hellström, Giuseppe Durisi

Recent work has established that the conditional mutual information (CMI) framework of Steinke and Zakynthinou (2020) is expressive enough to capture generalization guarantees in terms of algorithmic stability, VC dimension, and related complexity measures for conventional learning (Harutyunyan et al., 2021, Haghifam et al., 2021).

Generalization Bounds Learning Theory +2
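
The framework referenced here can be summarized as follows (standard definitions from Steinke and Zakynthinou (2020), restated in our notation):

```latex
% Draw a supersample \tilde{Z} \in \mathcal{Z}^{n \times 2} of i.i.d. samples
% and uniform selection bits S \in \{0,1\}^n picking one sample per column
% as the training set. For a learning algorithm \mathcal{A} producing
% hypothesis W, the conditional mutual information is
\mathrm{CMI}(\mathcal{A}) = I(W; S \mid \tilde{Z}),
% and the expected generalization gap satisfies
\left| \mathbb{E}[\mathrm{gen}] \right|
  \le \sqrt{\frac{2\,\mathrm{CMI}(\mathcal{A})}{n}}.
```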

Resolution-Adaptive All-Digital Spatial Equalization for mmWave Massive MU-MIMO

no code implementations23 Jul 2021 Oscar Castañeda, Seyed Hadi Mirfarshbafan, Shahaboddin Ghajari, Alyosha Molnar, Sven Jacobsson, Giuseppe Durisi, Christoph Studer

All-digital basestation (BS) architectures for millimeter-wave (mmWave) massive multi-user multiple-input multiple-output (MU-MIMO), which equip each radio-frequency chain with dedicated data converters, have advantages in spectral efficiency, flexibility, and baseband-processing simplicity over hybrid analog-digital solutions.
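
To make the "dedicated data converters" point concrete, here is a toy NumPy model of quantize-then-equalize in an all-digital receiver: each antenna's observation passes through a low-resolution ADC, and spatial equalization is done entirely in the digital domain. This is an illustrative sketch under simplified assumptions, not the paper's resolution-adaptive design:

```python
import numpy as np

rng = np.random.default_rng(0)
U, B, bits = 8, 64, 4                      # users, BS antennas, ADC resolution
H = (rng.standard_normal((B, U)) + 1j * rng.standard_normal((B, U))) / np.sqrt(2)
s = (rng.integers(0, 2, U) * 2 - 1) + 1j * (rng.integers(0, 2, U) * 2 - 1)
y = H @ s + 0.1 * (rng.standard_normal(B) + 1j * rng.standard_normal(B))

def quantize(x, bits):
    """Uniform quantizer applied per real dimension (toy ADC model)."""
    levels = 2 ** bits
    step = 2 * np.max(np.abs(np.concatenate([x.real, x.imag]))) / levels
    q = lambda v: step * (np.floor(v / step) + 0.5)
    return q(x.real) + 1j * q(x.imag)

r = quantize(y, bits)                      # low-resolution observations
W = np.linalg.solve(H.conj().T @ H + 0.01 * np.eye(U), H.conj().T)  # MMSE-style
s_hat = W @ r                              # all-digital spatial equalization
```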

Distortion-Aware Linear Precoding for Massive MIMO Downlink Systems with Nonlinear Power Amplifiers

no code implementations24 Dec 2020 Sina Rezaei Aghdam, Sven Jacobsson, Ulf Gustavsson, Giuseppe Durisi, Christoph Studer, Thomas Eriksson

By studying the spatial characteristics of the distortion, we demonstrate that conventional linear precoding techniques steer nonlinear distortions towards the users.

Information Theory Signal Processing
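
The spatial effect described here can be seen qualitatively in a toy simulation: with a memoryless third-order power-amplifier model, the distortion is a cubic function of the precoded per-antenna signal and therefore inherits its spatial direction, i.e., it is beamformed toward the users. This is an illustrative sketch, not the paper's distortion-aware precoder:

```python
import numpy as np

rng = np.random.default_rng(0)
U, B, N = 2, 32, 2000                       # users, antennas, symbols
H = (rng.standard_normal((U, B)) + 1j * rng.standard_normal((U, B))) / np.sqrt(2)
s = (rng.standard_normal((U, N)) + 1j * rng.standard_normal((U, N))) / np.sqrt(2)
x = H.conj().T @ s / np.sqrt(B)             # conventional MRT precoding

# Memoryless third-order PA model: distortion is a cubic function of x.
a3 = -0.1
y = x + a3 * x * np.abs(x) ** 2
d = y - x                                   # nonlinear distortion component
rx_d = H @ d                                # distortion received at the users
print(np.mean(np.abs(rx_d) ** 2))           # non-negligible: it is beamformed
```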

Transfer Meta-Learning: Information-Theoretic Bounds and Information Meta-Risk Minimization

no code implementations4 Nov 2020 Sharu Theresa Jose, Osvaldo Simeone, Giuseppe Durisi

In this paper, we introduce the problem of transfer meta-learning, in which tasks are drawn from a target task environment during meta-testing that may differ from the source task environment observed during meta-training.

Inductive Bias Meta-Learning
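
A schematic way to write the quantity at stake, assuming notation chosen here for illustration (not necessarily the paper's):

```latex
% A task environment is a distribution over tasks \tau, each with data law
% P_\tau. Meta-training tasks come from a source environment P_T, while the
% meta-test task is drawn from a possibly different target environment P_{T'}.
% The transfer meta-risk of a hyperparameter u of the base learner is
R_{T'}(u) = \mathbb{E}_{\tau \sim P_{T'}}\,
  \mathbb{E}_{Z^m \sim P_\tau^{\otimes m}}\,
  \mathbb{E}_{W \sim P_{W \mid Z^m, u}}\,
  \mathbb{E}_{Z \sim P_\tau}\big[\ell(W, Z)\big],
% which must be (approximately) minimized using data from P_T only.
```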

Fast-Rate Loss Bounds via Conditional Information Measures with Applications to Neural Networks

no code implementations22 Oct 2020 Fredrik Hellström, Giuseppe Durisi

If the conditional information density is bounded uniformly in the size $n$ of the training set, our bounds decay as $1/n$.
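
Schematically, such fast-rate bounds have the following structure (an illustrative form, not the paper's exact statement):

```latex
% For constants c_1, c_2 and conditional information density \imath(W, S),
% with probability at least 1-\delta,
L(W) \le c_1\, \widehat{L}_S(W)
  + \frac{c_2 \left( \imath(W, S) + \log(1/\delta) \right)}{n},
% so a uniform bound \imath(W, S) \le C together with a small training loss
% \widehat{L}_S(W) yields the stated O(1/n) decay.
```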

Conditional Mutual Information-Based Generalization Bound for Meta Learning

no code implementations21 Oct 2020 Arezou Rezazadeh, Sharu Theresa Jose, Giuseppe Durisi, Osvaldo Simeone

Meta-learning optimizes an inductive bias, typically in the form of the hyperparameters of a base-learning algorithm, by observing data from a finite number of related tasks.

Inductive Bias Meta-Learning
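
A minimal concrete instance of this setup, with ridge regression as the base learner and its regularization strength as the shared inductive bias (a toy sketch, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def base_learner(X, y, lam):
    """Ridge regression: the regularizer lam is the meta-learned hyperparameter."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def task(n=20, d=5):
    w = rng.standard_normal(d)
    X = rng.standard_normal((n, d))
    return X, X @ w + 0.5 * rng.standard_normal(n)

tasks = [task() for _ in range(10)]  # a finite number of related tasks

def meta_loss(lam):
    """Average held-out loss of the base learner across tasks."""
    losses = []
    for X, y in tasks:
        w_hat = base_learner(X[:10], y[:10], lam)
        losses.append(np.mean((X[10:] @ w_hat - y[10:]) ** 2))
    return np.mean(losses)

# Meta-learning: pick the hyperparameter that generalizes across tasks.
best_lam = min([0.01, 0.1, 1.0, 10.0], key=meta_loss)
```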

Nonvacuous Loss Bounds with Fast Rates for Neural Networks via Conditional Information Measures

no code implementations28 Sep 2020 Fredrik Hellström, Giuseppe Durisi

If the conditional information density is bounded uniformly in the size $n$ of the training set, our bounds decay as $1/n$, which is referred to as a fast rate.

High-Bandwidth Spatial Equalization for mmWave Massive MU-MIMO with Processing-In-Memory

no code implementations8 Sep 2020 Oscar Castañeda, Sven Jacobsson, Giuseppe Durisi, Tom Goldstein, Christoph Studer

All-digital basestation (BS) architectures enable superior spectral efficiency compared to hybrid solutions in massive multi-user MIMO systems.

Generalization Bounds via Information Density and Conditional Information Density

no code implementations16 May 2020 Fredrik Hellström, Giuseppe Durisi

We present a general approach, based on exponential inequalities, to derive bounds on the generalization error of randomized learning algorithms.

Generalization Bounds
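
The generic recipe behind such exponential-inequality bounds is standard and worth stating (our notation, illustrating the mechanism rather than the paper's specific results):

```latex
% If a random quantity \gamma(W, S), e.g., a weighted generalization error
% minus an information-density term, satisfies
\mathbb{E}\big[e^{\gamma(W, S)}\big] \le 1,
% then Markov's inequality gives, for every \delta \in (0, 1),
\mathbb{P}\big[\gamma(W, S) \ge \log(1/\delta)\big] \le \delta,
% and rearranging the event yields a high-probability generalization bound.
```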

Generalization Error Bounds via $m$th Central Moments of the Information Density

no code implementations20 Apr 2020 Fredrik Hellström, Giuseppe Durisi

Our approach can be used to obtain bounds on the average generalization error as well as on its tail probabilities. It covers both the case in which a new hypothesis is randomly generated every time the algorithm is used, as often assumed in the probably approximately correct (PAC)-Bayesian literature, and the single-draw case, in which the hypothesis is extracted only once.

Two-sample testing
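
The elementary step linking $m$th central moments to tail probabilities is the generalized Markov/Chebyshev inequality (a standard fact, stated here for context):

```latex
% For any random variable X, any t > 0, and any m \ge 1,
\mathbb{P}\big[\,|X - \mathbb{E}[X]| \ge t\,\big]
  \le \frac{\mathbb{E}\big[\,|X - \mathbb{E}[X]|^{m}\,\big]}{t^{m}},
% which, roughly, is applied with X related to the information density to
% control the tails of the generalization error.
```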
