no code implementations • 17 May 2023 • Shahab Asoodeh, Mario Diaz
The Noisy-SGD algorithm is widely used for privately training machine learning models.
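A minimal sketch of a Noisy-SGD-style update, assuming per-sample gradient clipping to a norm bound followed by calibrated Gaussian noise, in the spirit of DP-SGD; the function name, parameters, and noise calibration below are illustrative choices, not taken from this paper.

```python
import numpy as np

def noisy_sgd_step(theta, per_sample_grads, clip_norm=1.0, noise_mult=1.0,
                   lr=0.1, rng=np.random.default_rng(0)):
    """One Noisy-SGD step: clip each per-sample gradient to clip_norm,
    average the clipped gradients, add Gaussian noise scaled to the
    clipping bound, and take a gradient step."""
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_sample_grads]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_sample_grads),
                       size=theta.shape)
    return theta - lr * (avg + noise)
```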
no code implementations • 20 Dec 2020 • Shahab Asoodeh, Mario Diaz, Flavio P. Calmon
First, it implies that local differential privacy can be equivalently expressed in terms of the contraction of $E_\gamma$-divergence.
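For context, the $E_\gamma$-divergence is the hockey-stick divergence, and pure local differential privacy admits a standard zero-divergence characterization in terms of it; the display below is background material, not the paper's contraction result:

$$E_\gamma(P\,\|\,Q) \;=\; \sup_{A}\bigl[P(A) - \gamma\, Q(A)\bigr], \qquad \gamma \ge 1,$$

and a mechanism $K$ is $\varepsilon$-LDP if and only if $E_{e^\varepsilon}\bigl(K(\cdot\mid x)\,\|\,K(\cdot\mid x')\bigr) = 0$ for every pair of inputs $x, x'$.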
no code implementations • 22 Jun 2020 • Tyler Sypherd, Mario Diaz, Lalitha Sankar, Gautam Dasarathy
We analyze the optimization landscape of a recently introduced tunable class of loss functions called $\alpha$-loss, $\alpha \in (0,\infty]$, in the logistic model.
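As a toy illustration of what "optimization landscape" means here, one can evaluate the empirical $\alpha$-risk of a one-parameter logistic model over a grid of parameter values; the margin-based form of $\alpha$-loss used below, and all of the data, are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def alpha_loss(p, alpha):
    """alpha-loss of the probability p assigned to the true label
    (the alpha=1 branch is the log-loss limit)."""
    if alpha == 1.0:
        return -np.log(p)
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))

# Toy 1-D logistic model: labels y in {-1, +1}, score theta * x.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.sign(x + 0.5 * rng.normal(size=200))

thetas = np.linspace(-5, 5, 401)
for alpha in (0.5, 1.0, 2.0, np.inf):
    a = 1e6 if np.isinf(alpha) else alpha  # large alpha approximates the limit
    risks = [alpha_loss(sigmoid(y * (t * x)), a).mean() for t in thetas]
    print(f"alpha={alpha}: argmin theta ~ {thetas[int(np.argmin(risks))]:.2f}")
```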
no code implementations • 12 Feb 2020 • Hao Wang, Hsiang Hsu, Mario Diaz, Flavio P. Calmon
To evaluate the effect of disparate treatment, we compare the performance of split classifiers (i.e., classifiers trained and deployed separately on each group) with group-blind classifiers (i.e., classifiers which do not use a sensitive attribute).
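A small sketch of this comparison on made-up data, assuming scikit-learn's LogisticRegression; the group structure, the single feature, and the absence of a held-out test split are all simplifications for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data: feature x, sensitive group s in {0, 1}, label y.
rng = np.random.default_rng(0)
n = 2000
s = rng.integers(0, 2, size=n)
x = rng.normal(loc=s, size=(n,))    # group-dependent feature shift
y = (x + 0.5 * s + rng.normal(size=n) > 0.5).astype(int)
X = x.reshape(-1, 1)

# Group-blind classifier: trained on everyone, never sees s.
blind = LogisticRegression().fit(X, y)

for g in (0, 1):
    mask = s == g
    # Split classifier: trained and evaluated on group g only.
    split = LogisticRegression().fit(X[mask], y[mask])
    print(f"group {g}: split acc={split.score(X[mask], y[mask]):.3f}, "
          f"blind acc={blind.score(X[mask], y[mask]):.3f}")
```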
no code implementations • 17 Jan 2020 • Shahab Asoodeh, Mario Diaz, Flavio P. Calmon
We investigate the framework of privacy amplification by iteration, recently proposed by Feldman et al., through an information-theoretic lens.
no code implementations • 8 Nov 2019 • Mario Diaz, Peter Kairouz, Jiachun Liao, Lalitha Sankar
Privacy concerns have led to the development of privacy-preserving approaches for learning models from sensitive data.
1 code implementation • 5 Jun 2019 • Tyler Sypherd, Mario Diaz, John Kevin Cava, Gautam Dasarathy, Peter Kairouz, Lalitha Sankar
We introduce $\alpha$-loss, a tunable loss function for classification in machine learning, parameterized by $\alpha \in (0,\infty]$, which interpolates between the exponential loss ($\alpha = 1/2$), the log-loss ($\alpha = 1$), and the 0-1 loss ($\alpha = \infty$).
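Written out, with $p$ the probability the model assigns to the true label, a closed form consistent with these endpoints is (hedged: this is the standard expression for $\alpha$-loss, up to the paper's exact normalization):

$$\ell_\alpha(p) \;=\; \frac{\alpha}{\alpha-1}\Bigl(1 - p^{\,\frac{\alpha-1}{\alpha}}\Bigr) \quad (\alpha \neq 1), \qquad \ell_1(p) \;=\; -\log p,$$

so that $\ell_{1/2}(p) = \tfrac{1}{p} - 1$ (the exponential loss $e^{-z}$ under a logistic link $p = \sigma(z)$), $\ell_1$ is the log-loss, and $\ell_\infty(p) = 1 - p$.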
no code implementations • 12 Feb 2019 • Tyler Sypherd, Mario Diaz, Lalitha Sankar, Peter Kairouz
We present $\alpha$-loss, $\alpha \in [1,\infty]$, a tunable loss function for binary classification that bridges log-loss ($\alpha=1$) and $0$-$1$ loss ($\alpha = \infty$).
no code implementations • 7 Nov 2015 • Shahab Asoodeh, Mario Diaz, Fady Alajaji, Tamás Linder
To this end, the so-called rate-privacy function is introduced to quantify the maximal amount of information (measured in terms of mutual information) that can be extracted from $Y$ under a privacy constraint between $X$ and the extracted information, where privacy is measured using either mutual information or maximal correlation.
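In symbols, with $X \to Y \to Z$ a Markov chain (the disclosed $Z$ is generated from the observable $Y$), the mutual-information variant reads, as best reconstructed from the description above:

$$g_\epsilon(X;Y) \;=\; \sup_{P_{Z\mid Y}\,:\; I(X;Z)\,\le\,\epsilon} I(Y;Z),$$

and the maximal-correlation variant replaces the constraint $I(X;Z) \le \epsilon$ with a bound on the maximal correlation between $X$ and $Z$.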