Search Results for author: Mario Diaz

Found 9 papers, 1 paper with code

Privacy Loss of Noisy Stochastic Gradient Descent Might Converge Even for Non-Convex Losses

no code implementations • 17 May 2023 • Shahab Asoodeh, Mario Diaz

The Noisy-SGD algorithm is widely used for privately training machine learning models.
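
For context, Noisy-SGD follows the familiar pattern of perturbing each gradient step with Gaussian noise. The sketch below is a generic illustration of that update, not the paper's exact formulation; the step size lr, clipping bound C, and noise scale sigma are placeholder parameters.

import numpy as np

def noisy_sgd_step(w, per_example_grads, lr=0.1, C=1.0, sigma=1.0, rng=None):
    """One generic Noisy-SGD update: clip per-example gradients,
    average them, and add isotropic Gaussian noise before stepping."""
    if rng is None:
        rng = np.random.default_rng()
    # Clip each per-example gradient to L2 norm at most C.
    clipped = [g * min(1.0, C / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    # Noise standard deviation is calibrated to the clipping bound.
    noise = rng.normal(0.0, sigma * C / len(per_example_grads), size=w.shape)
    return w - lr * (avg + noise)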

Contraction of $E_\gamma$-Divergence and Its Applications to Privacy

no code implementations • 20 Dec 2020 • Shahab Asoodeh, Mario Diaz, Flavio P. Calmon

First, it implies that local differential privacy can be equivalently expressed in terms of the contraction of $E_\gamma$-divergence.
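
For reference, the $E_\gamma$-divergence (also known as the hockey-stick divergence) between distributions $P$ and $Q$ is defined, for $\gamma \geq 1$, by

$$E_\gamma(P \,\|\, Q) = \sup_{A} \bigl[ P(A) - \gamma\, Q(A) \bigr],$$

and the standard link to differential privacy (a well-known fact, stated here for orientation rather than quoted from the paper) is that a mechanism $M$ is $(\varepsilon, \delta)$-differentially private if and only if $E_{e^{\varepsilon}}(M(x) \,\|\, M(x')) \leq \delta$ for all neighboring inputs $x, x'$.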

On the $\alpha$-loss Landscape in the Logistic Model

no code implementations • 22 Jun 2020 • Tyler Sypherd, Mario Diaz, Lalitha Sankar, Gautam Dasarathy

We analyze the optimization landscape of a recently introduced tunable class of loss functions called $\alpha$-loss, $\alpha \in (0,\infty]$, in the logistic model.
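
For orientation, the object under study takes the following standard form (stated here generically, not quoted from the paper): with $p$ the model's predicted probability of the true label,

$$\ell_\alpha(p) = \frac{\alpha}{\alpha - 1}\left(1 - p^{\frac{\alpha - 1}{\alpha}}\right), \qquad \alpha \in (0,1) \cup (1,\infty),$$

with the limits $\ell_1(p) = -\log p$ (log-loss) and $\ell_\infty(p) = 1 - p$. In the logistic model, $p = \sigma(y\, w^\top x)$ with $\sigma$ the sigmoid, so the landscape is analyzed as a function of the weights $w$.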

To Split or Not to Split: The Impact of Disparate Treatment in Classification

no code implementations • 12 Feb 2020 • Hao Wang, Hsiang Hsu, Mario Diaz, Flavio P. Calmon

To evaluate the effect of disparate treatment, we compare the performance of split classifiers (i.e., classifiers trained and deployed separately on each group) with group-blind classifiers (i.e., classifiers which do not use a sensitive attribute).

Attribute · General Classification
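
As a toy illustration of that comparison (not the paper's experimental setup; the synthetic data and logistic models below are placeholder choices), one can train a classifier per group and a single group-blind classifier, then compare per-group accuracy:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)          # hypothetical sensitive attribute
X = rng.normal(size=(n, 5))
# Hypothetical labels whose dependence on X differs by group.
y = ((X[:, 0] + (2 * group - 1) * X[:, 1]) > 0).astype(int)

# Group-blind classifier: one model, sensitive attribute unused.
blind = LogisticRegression().fit(X, y)

# Split classifiers: one model trained per group.
split = {g: LogisticRegression().fit(X[group == g], y[group == g])
         for g in (0, 1)}

for g in (0, 1):
    mask = group == g
    print(g,
          blind.score(X[mask], y[mask]),     # group-blind accuracy
          split[g].score(X[mask], y[mask]))  # split accuracy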

Privacy Amplification of Iterative Algorithms via Contraction Coefficients

no code implementations • 17 Jan 2020 • Shahab Asoodeh, Mario Diaz, Flavio P. Calmon

We investigate the framework of privacy amplification by iteration, recently proposed by Feldman et al., through an information-theoretic lens.
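
The underlying tool can be stated generically (standard definition, included here for orientation rather than quoted from the paper): the contraction coefficient of a Markov kernel $K$ with respect to a divergence $D$ is

$$\eta_D(K) = \sup_{P \neq Q} \frac{D(PK \,\|\, QK)}{D(P \,\|\, Q)} \leq 1,$$

so composing $T$ kernels yields $D(P K_1 \cdots K_T \,\|\, Q K_1 \cdots K_T) \leq \bigl(\prod_{t=1}^{T} \eta_D(K_t)\bigr)\, D(P \,\|\, Q)$, which is the sense in which iteration amplifies privacy.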

Theoretical Guarantees for Model Auditing with Finite Adversaries

no code implementations • 8 Nov 2019 • Mario Diaz, Peter Kairouz, Jiachun Liao, Lalitha Sankar

Privacy concerns have led to the development of privacy-preserving approaches for learning models from sensitive data.

Privacy Preserving

A Tunable Loss Function for Robust Classification: Calibration, Landscape, and Generalization

1 code implementation • 5 Jun 2019 • Tyler Sypherd, Mario Diaz, John Kevin Cava, Gautam Dasarathy, Peter Kairouz, Lalitha Sankar

We introduce a tunable loss function called $\alpha$-loss, parameterized by $\alpha \in (0,\infty]$, which interpolates between the exponential loss ($\alpha = 1/2$), the log-loss ($\alpha = 1$), and the 0-1 loss ($\alpha = \infty$), for the machine learning setting of classification.

Classification · General Classification +1
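
A minimal sketch of the loss itself (illustrative; written from the standard closed form rather than taken from the paper's repository; p denotes the predicted probability of the true label):

import numpy as np

def alpha_loss(p, alpha):
    """alpha-loss of the true-label probability p, for alpha in (0, inf]."""
    p = np.asarray(p, dtype=float)
    if alpha == 1.0:           # log-loss limit
        return -np.log(p)
    if np.isinf(alpha):        # soft 0-1 loss limit
        return 1.0 - p
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))

p = 0.8
print(alpha_loss(p, 0.5))     # 1/p - 1 = 0.25, the exponential-type loss
print(alpha_loss(p, 1.0))     # -log(0.8) ~ 0.223, the log-loss
print(alpha_loss(p, np.inf))  # 1 - p = 0.2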

A Tunable Loss Function for Binary Classification

no code implementations • 12 Feb 2019 • Tyler Sypherd, Mario Diaz, Lalitha Sankar, Peter Kairouz

We present $\alpha$-loss, $\alpha \in [1,\infty]$, a tunable loss function for binary classification that bridges log-loss ($\alpha=1$) and $0$-$1$ loss ($\alpha = \infty$).

Binary Classification · Classification +2

Information Extraction Under Privacy Constraints

no code implementations • 7 Nov 2015 • Shahab Asoodeh, Mario Diaz, Fady Alajaji, Tamás Linder

To this end, the so-called rate-privacy function is introduced to quantify the maximal amount of information (measured in terms of mutual information) that can be extracted from $Y$ under a privacy constraint between $X$ and the extracted information, where privacy is measured using either mutual information or maximal correlation.

Quantization
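
In symbols, under the mutual-information privacy measure the rate-privacy function takes the form below (a sketch of the standard formulation, not quoted verbatim from the paper):

$$g_\varepsilon(X; Y) = \sup_{P_{U \mid Y}\,:\; I(X; U) \leq \varepsilon} I(Y; U),$$

where $U$ ranges over randomized functions of $Y$ (so the Markov chain $X - Y - U$ holds); per the abstract, the paper's second variant replaces the mutual-information constraint with a bound on the maximal correlation between $X$ and $U$.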
