Search Results for author: Shahab Asoodeh

Found 13 papers, 0 papers with code

Local Differential Privacy Is Equivalent to Contraction of $E_γ$-Divergence

no code implementations • 2 Feb 2021 • Shahab Asoodeh, Maryam Aliakbarpour, Flavio P. Calmon

We investigate the local differential privacy (LDP) guarantees of a randomized privacy mechanism via its contraction properties.
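
For context, the $E_\gamma$-divergence (also known as the hockey-stick divergence) between distributions $P$ and $Q$ is the standard quantity

$$E_\gamma(P\|Q) = \sup_{A}\bigl(P(A) - \gamma\, Q(A)\bigr), \qquad \gamma \ge 1,$$

and a mechanism $K$ satisfies $\varepsilon$-LDP exactly when $E_{e^{\varepsilon}}\bigl(K(\cdot\mid x)\,\|\,K(\cdot\mid x')\bigr) = 0$ for every pair of inputs $x, x'$; this background characterization is stated here for orientation, not as a summary of the paper's result.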

Privacy Analysis of Online Learning Algorithms via Contraction Coefficients

no code implementations • 20 Dec 2020 • Shahab Asoodeh, Mario Diaz, Flavio P. Calmon

Specifically, we demonstrate that differential privacy guarantees of iterative algorithms can be determined by a direct application of contraction coefficients derived from strong data processing inequalities for $f$-divergences.
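
As background, the contraction coefficient of a Markov kernel $K$ for an $f$-divergence $D_f$ is the standard quantity

$$\eta_f(K) = \sup_{P, Q:\; 0 < D_f(P\|Q) < \infty} \frac{D_f(PK\,\|\,QK)}{D_f(P\,\|\,Q)},$$

so the strong data processing inequality $D_f(PK\|QK) \le \eta_f(K)\, D_f(P\|Q)$ quantifies how much one pass through the kernel shrinks the divergence; the way these coefficients are applied to iterative learning algorithms is the paper's contribution.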

online learning

Bottleneck Problems: Information and Estimation-Theoretic View

no code implementations • 12 Nov 2020 • Shahab Asoodeh, Flavio Calmon

Information bottleneck (IB) and privacy funnel (PF) are two closely related optimization problems that have found applications in machine learning, the design of privacy algorithms, capacity problems (e.g., Mrs. Gerber's Lemma), and strong data processing inequalities, among others.
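
For reference, a common mutual-information formulation of the two problems (notation may differ from the paper's) is

$$\mathrm{IB}(R) = \sup_{P_{W\mid X}:\; I(W;X) \le R} I(W;Y), \qquad \mathrm{PF}(r) = \inf_{P_{W\mid Y}:\; I(W;Y) \ge r} I(W;X),$$

where the supremum is over Markov chains $W \to X \to Y$ and the infimum over Markov chains $X \to Y \to W$.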

Two-sample testing

Three Variants of Differential Privacy: Lossless Conversion and Applications

no code implementations • 14 Aug 2020 • Shahab Asoodeh, Jiachun Liao, Flavio P. Calmon, Oliver Kosut, Lalitha Sankar

In the first part, we develop machinery for optimally relating approximate DP to RDP based on the joint range of the two $f$-divergences that underlie approximate DP and RDP.
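
For orientation (standard characterizations, not the paper's result): $(\varepsilon,\delta)$-DP can be expressed as $E_{e^{\varepsilon}}(P\|Q) \le \delta$ for output distributions $P, Q$ of neighboring datasets, where $E_\gamma$ is the hockey-stick divergence, while $(\alpha,\varepsilon)$-RDP requires $D_\alpha(P\|Q) \le \varepsilon$ with the Rényi divergence

$$D_\alpha(P\|Q) = \frac{1}{\alpha - 1}\,\log \mathbb{E}_{Q}\!\left[\left(\frac{\mathrm{d}P}{\mathrm{d}Q}\right)^{\alpha}\right],$$

which is a monotone transform of an $f$-divergence.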

Privacy Amplification of Iterative Algorithms via Contraction Coefficients

no code implementations • 17 Jan 2020 • Shahab Asoodeh, Mario Diaz, Flavio P. Calmon

We investigate the framework of privacy amplification by iteration, recently proposed by Feldman et al., through an information-theoretic lens.

A Better Bound Gives a Hundred Rounds: Enhanced Privacy Guarantees via $f$-Divergences

no code implementations • 16 Jan 2020 • Shahab Asoodeh, Jiachun Liao, Flavio P. Calmon, Oliver Kosut, Lalitha Sankar

We derive the optimal differential privacy (DP) parameters of a mechanism that satisfies a given level of Rényi differential privacy (RDP).
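
For comparison, a widely used (but generally loose) baseline conversion due to Mironov states that an $(\alpha, \varepsilon)$-RDP mechanism satisfies

$$\Bigl(\varepsilon + \frac{\log(1/\delta)}{\alpha - 1},\; \delta\Bigr)\text{-DP} \qquad \text{for every } \delta \in (0, 1);$$

this is quoted only as background and is not the optimal conversion derived in the paper.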

Obfuscation via Information Density Estimation

no code implementations • 17 Oct 2019 • Hsiang Hsu, Shahab Asoodeh, Flavio du Pin Calmon

The core of this mechanism relies on a data-driven estimate of the trimmed information density, for which we propose a novel estimator named the trimmed information density estimator (TIDE).
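
For context, the (untrimmed) information density of a pair $(x, y)$ is the standard quantity

$$i(x; y) = \log \frac{\mathrm{d}P_{XY}}{\mathrm{d}(P_X \times P_Y)}(x, y),$$

whose expectation under $P_{XY}$ is the mutual information $I(X; Y)$; the trimmed variant and the TIDE estimator are specific to the paper and are not reproduced here.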

Density Estimation

Wasserstein Soft Label Propagation on Hypergraphs: Algorithm and Generalization Error Bounds

no code implementations • 6 Sep 2018 • Tingran Gao, Shahab Asoodeh, Yi Huang, James Evans

Inspired by recent interest in developing machine learning and data mining algorithms on hypergraphs, we investigate in this paper the semi-supervised task of propagating "soft labels" (e.g., probability distributions, class membership scores) over hypergraphs by means of optimal transportation.
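
For context, the optimal-transport metric underlying such soft-label propagation is the Wasserstein distance between distributions $\mu$ and $\nu$ on a metric space $(\mathcal{X}, d)$,

$$W_p(\mu, \nu) = \left(\inf_{\pi \in \Pi(\mu, \nu)} \int_{\mathcal{X}\times\mathcal{X}} d(x, y)^p \, \mathrm{d}\pi(x, y)\right)^{1/p},$$

where $\Pi(\mu, \nu)$ is the set of couplings of $\mu$ and $\nu$; how this metric is deployed on hypergraphs is the subject of the paper.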

A Tamper-Free Semi-Universal Communication System for Deletion Channels

no code implementations • 9 Apr 2018 • Shahab Asoodeh, Yi Huang, Ishanu Chattopadhyay

We investigate the problem of reliable communication between two legitimate parties over deletion channels under an active eavesdropping (aka jamming) adversarial model.

Curvature of Hypergraphs via Multi-Marginal Optimal Transport

no code implementations • 22 Mar 2018 • Shahab Asoodeh, Tingran Gao, James Evans

We introduce a novel definition of curvature for hypergraphs, a natural generalization of graphs, via a multi-marginal optimal transport problem for a naturally defined random walk on the hypergraph.
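
As a reference point (not a statement of the paper's definition), the graph case being generalized is Ollivier's coarse Ricci curvature,

$$\kappa(x, y) = 1 - \frac{W_1(\mu_x, \mu_y)}{d(x, y)},$$

where $\mu_x$ and $\mu_y$ are the one-step random-walk distributions at vertices $x$ and $y$ and $W_1$ is the Wasserstein distance; the hypergraph version via multi-marginal optimal transport is the paper's contribution.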

Generalizing Bottleneck Problems

no code implementations • 16 Feb 2018 • Hsiang Hsu, Shahab Asoodeh, Salman Salamatian, Flavio P. Calmon

Given a pair of random variables $(X, Y)\sim P_{XY}$ and two convex functions $f_1$ and $f_2$, we introduce two bottleneck functionals as the lower and upper boundaries of the two-dimensional convex set that consists of the pairs $\left(I_{f_1}(W; X), I_{f_2}(W; Y)\right)$, where $I_f$ denotes $f$-information and $W$ varies over the set of all discrete random variables satisfying the Markov condition $W \to X \to Y$.
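
Here $f$-information refers to the standard generalization of mutual information (stated for context, not quoted from the paper),

$$I_f(W; X) = D_f\!\left(P_{WX} \,\|\, P_W \times P_X\right),$$

which reduces to Shannon mutual information when $f(t) = t \log t$.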

Information Extraction Under Privacy Constraints

no code implementations • 7 Nov 2015 • Shahab Asoodeh, Mario Diaz, Fady Alajaji, Tamás Linder

To this end, the so-called rate-privacy function is introduced to quantify the maximal amount of information (measured in terms of mutual information) that can be extracted from $Y$ under a privacy constraint between $X$ and the extracted information, where privacy is measured using either mutual information or maximal correlation.
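
In the mutual-information case, a common way to write this quantity (notation may differ from the paper's) is

$$g_\varepsilon(X; Y) = \sup_{P_{Z\mid Y}:\; X \to Y \to Z,\; I(X; Z) \le \varepsilon} I(Y; Z),$$

i.e., the most information about $Y$ that a released variable $Z$ can carry while the leakage $I(X; Z)$ is at most $\varepsilon$.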

Quantization
