no code implementations • 11 Jul 2024 • Alex Oesterling, Claudio Mayrink Verdun, Carol Xuan Long, Alex Glynn, Lucas Monteiro Paes, Sajani Vithana, Martina Cardone, Flavio P. Calmon

We introduce Multi-Group Proportional Representation (MPR), a novel metric that measures representation across intersectional groups.

no code implementations • 3 Jul 2024 • Sajani Vithana, Viveck R. Cadambe, Flavio P. Calmon, Haewon Jeong

Differentially private distributed mean estimation (DP-DME) is a fundamental building block in privacy-preserving federated learning, where a central server estimates the mean of $d$-dimensional vectors held by $n$ users while ensuring $(\epsilon,\delta)$-DP.
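The standard Gaussian-mechanism baseline for DP-DME can be sketched as follows; the clipping norm, noise calibration, and all names here are illustrative assumptions, not the paper's construction:

```python
import math
import random

def gaussian_mechanism_mean(vectors, clip_norm, epsilon, delta, rng):
    """Estimate the mean of d-dimensional user vectors under (epsilon, delta)-DP:
    clip each vector to L2 norm `clip_norm`, average, and add Gaussian noise
    calibrated to the mean's L2 sensitivity clip_norm / n (classic Gaussian mechanism)."""
    n = len(vectors)
    d = len(vectors[0])
    # Clip each user's vector so the mean's sensitivity is bounded.
    clipped = []
    for v in vectors:
        norm = math.sqrt(sum(x * x for x in v))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append([x * scale for x in v])
    mean = [sum(v[j] for v in clipped) / n for j in range(d)]
    # Noise standard deviation from the standard Gaussian-mechanism bound.
    sigma = (clip_norm / n) * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    return [m + rng.gauss(0.0, sigma) for m in mean]
```

At large epsilon the noise vanishes and the output approaches the true mean; the paper studies the full trade-off between such central-DP estimators and LDP ones.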

no code implementations • 29 May 2024 • Lucas Monteiro Paes, Dennis Wei, Flavio P. Calmon

Feature attribution methods explain black-box machine learning (ML) models by assigning importance scores to input features.

no code implementations • 26 Feb 2024 • Juan Felipe Gomez, Caio Vieira Machado, Lucas Monteiro Paes, Flavio P. Calmon

Our findings also contribute to content moderation and intermediary liability laws being discussed and passed in many countries, such as the Digital Services Act in the European Union, the Online Safety Act in the United Kingdom, and the Fake News Bill in Brazil.

1 code implementation • 16 Feb 2024 • Usha Bhalla, Alex Oesterling, Suraj Srinivas, Flavio P. Calmon, Himabindu Lakkaraju

CLIP embeddings have demonstrated remarkable performance across a wide range of computer vision tasks.

no code implementations • 6 Dec 2023 • Lucas Monteiro Paes, Ananda Theertha Suresh, Alex Beutel, Flavio P. Calmon, Ahmad Beirami

Here, the sample complexity for estimating the worst-case performance gap across groups (e.g., the largest difference in error rates) increases exponentially with the number of group-denoting sensitive attributes.
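To make the exponential blow-up concrete, here is a toy count under the simplifying assumption of k binary sensitive attributes (an illustration, not the paper's bound):

```python
# With k binary sensitive attributes, the number of intersectional groups
# doubles with each attribute, and the number of pairwise performance gaps
# to estimate grows quadratically in that group count.
def num_groups(k: int) -> int:
    return 2 ** k

def num_pairwise_gaps(k: int) -> int:
    g = num_groups(k)
    return g * (g - 1) // 2
```

Already at k = 10 there are 1024 groups, so estimating every per-group error rate accurately requires samples from each of them.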

no code implementations • 28 Sep 2023 • Viveck R. Cadambe, Ateet Devulapalli, Haewon Jeong, Flavio P. Calmon

We consider the problem of private distributed multi-party multiplication.

1 code implementation • 27 Jul 2023 • Alex Oesterling, Jiaqi Ma, Flavio P. Calmon, Hima Lakkaraju

In this work, we demonstrate that most efficient unlearning methods cannot accommodate popular fairness interventions, and we propose the first fair machine unlearning method that can efficiently unlearn data instances from a fair objective.

1 code implementation • 15 Jun 2023 • Carol Xuan Long, Hsiang Hsu, Wael Alghamdi, Flavio P. Calmon

Machine learning tasks may admit multiple competing models that achieve similar performance yet produce conflicting outputs for individual samples -- a phenomenon known as predictive multiplicity.

1 code implementation • 10 Mar 2023 • Haitong Ma, Tianpeng Zhang, Yixuan Wu, Flavio P. Calmon, Na Li

We focus on Entropy Search (ES), a sample-efficient BO algorithm that selects queries to maximize the mutual information about the maximum of the black-box function.

1 code implementation • 28 Feb 2023 • Bogdan Kulynych, Hsiang Hsu, Carmela Troncoso, Flavio P. Calmon

We demonstrate that such randomization incurs predictive multiplicity: for a given input example, the output predicted by equally-private models depends on the randomness used in training.
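A minimal sketch of this effect, using a hypothetical noisy 1-D threshold learner as a stand-in for private training (all names, data, and the noise model are illustrative):

```python
import random

def noisy_threshold_classifier(scores, labels, noise_scale, rng):
    """Fit a 1-D threshold by empirical risk minimization (predict 1 if
    score >= t), then perturb it with symmetric noise, mimicking the
    randomness injected by a private training procedure."""
    best_t, best_err = None, float("inf")
    for t in sorted(set(scores)):
        err = sum((s >= t) != y for s, y in zip(scores, labels))
        if err < best_err:
            best_t, best_err = t, err
    # Symmetric noise stands in for the privacy mechanism's randomness.
    return best_t + rng.uniform(-noise_scale, noise_scale)

scores = [0.1, 0.2, 0.8, 0.9]
labels = [0, 0, 1, 1]
# Two equally-private models trained with different random seeds...
t1 = noisy_threshold_classifier(scores, labels, 0.3, random.Random(1))
t2 = noisy_threshold_classifier(scores, labels, 0.3, random.Random(2))
# ...land at different thresholds, so borderline inputs can receive
# conflicting predictions depending only on the training seed.
x = 0.8
pred1, pred2 = x >= t1, x >= t2
```

Both models fit the data equally well and satisfy the same privacy guarantee, yet their outputs on individual points depend on the seed, which is exactly the predictive multiplicity the paper quantifies.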

no code implementations • NeurIPS 2023 • Hao Wang, Luxi He, Rui Gao, Flavio P. Calmon

We categorize sources of discrimination in the ML pipeline into two classes: aleatoric discrimination, which is inherent in the data distribution, and epistemic discrimination, which is due to decisions made during model development.

1 code implementation • 17 Sep 2022 • Marguerite B. Basta, Sarfaraz Hussein, Hsiang Hsu, Flavio P. Calmon

Then, the identified tumors are passed to a second CNN for recurrence risk prediction.

no code implementations • 20 Aug 2022 • Wael Alghamdi, Shahab Asoodeh, Flavio P. Calmon, Juan Felipe Gomez, Oliver Kosut, Lalitha Sankar, Fei Wei

SPA approximates privacy guarantees for the composition of DP mechanisms in an accurate and fast manner.

1 code implementation • 11 Jul 2022 • Behrooz Razeghi, Flavio P. Calmon, Deniz Gunduz, Slava Voloshynovskiy

In this work, we propose a general family of optimization problems, termed the complexity-leakage-utility bottleneck (CLUB) model, which (i) provides a unified theoretical framework generalizing most of the state-of-the-art literature on information-theoretic privacy models, (ii) establishes a new interpretation of popular generative and discriminative models, (iii) offers new insights into generative compression models, and (iv) can be applied to fair generative models.

no code implementations • 25 Jun 2022 • Wael Alghamdi, Shahab Asoodeh, Flavio P. Calmon, Oliver Kosut, Lalitha Sankar, Fei Wei

Since the optimization problem is infinite dimensional, it cannot be solved directly; nevertheless, we quantize the problem to derive near-optimal additive mechanisms that we call "cactus mechanisms" due to their shape.

1 code implementation • 15 Jun 2022 • Wael Alghamdi, Hsiang Hsu, Haewon Jeong, Hao Wang, P. Winston Michalak, Shahab Asoodeh, Flavio P. Calmon

We consider the problem of producing fair probabilistic classifiers for multi-class classification tasks.

no code implementations • 21 Sep 2021 • Haewon Jeong, Hao Wang, Flavio P. Calmon

We investigate the fairness concerns of training a machine learning model using data with missing values.

no code implementations • 11 Feb 2021 • Wael Alghamdi, Flavio P. Calmon

We consider a channel $Y=X+N$ where $X$ is a random variable satisfying $\mathbb{E}[|X|]<\infty$ and $N$ is an independent standard normal random variable.

Information Theory, Probability

no code implementations • NeurIPS 2021 • Hao Wang, Rui Gao, Flavio P. Calmon

In this paper, we analyze the generalization of models trained by noisy iterative algorithms.

no code implementations • 2 Feb 2021 • Shahab Asoodeh, Maryam Aliakbarpour, Flavio P. Calmon

We investigate the local differential privacy (LDP) guarantees of a randomized privacy mechanism via its contraction properties.

no code implementations • 20 Dec 2020 • Shahab Asoodeh, Mario Diaz, Flavio P. Calmon

First, it implies that local differential privacy can be equivalently expressed in terms of the contraction of $E_\gamma$-divergence.
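For orientation, the $E_\gamma$-divergence (hockey-stick divergence) has the standard form below, and pure $\varepsilon$-LDP is its zero level set; this is a known identity stated for reference, not the paper's new result:

```latex
E_\gamma(P \,\|\, Q) = \sup_{\text{measurable } A} \bigl( P(A) - \gamma\, Q(A) \bigr),
\qquad
K \text{ is } \varepsilon\text{-LDP}
\iff
\sup_{x, x'} E_{e^{\varepsilon}}\!\bigl( K(\cdot \mid x) \,\|\, K(\cdot \mid x') \bigr) = 0 .
```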

no code implementations • 14 Aug 2020 • Shahab Asoodeh, Jiachun Liao, Flavio P. Calmon, Oliver Kosut, Lalitha Sankar

In the first part, we develop a machinery for optimally relating approximate DP to RDP based on the joint range of two $f$-divergences that underlie the approximate DP and RDP.

1 code implementation • ICLR 2021 • Sungmin Cha, Hsiang Hsu, Taebaek Hwang, Flavio P. Calmon, Taesup Moon

Inspired by both recent results on neural networks with wide local minima and information theory, CPR adds an additional regularization term that maximizes the entropy of a classifier's output probability.
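A hedged sketch of such an entropy-maximizing regularization term, assuming a softmax classifier and a hypothetical weight `beta` (this is the general idea, not the paper's exact objective):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy_regularized_loss(logits, target, beta):
    """Cross-entropy minus beta times the entropy of the predicted
    distribution: maximizing output entropy discourages overconfident
    predictions, in the spirit of the CPR term described above."""
    probs = softmax(logits)
    ce = -math.log(probs[target])
    ent = -sum(p * math.log(p) for p in probs if p > 0)
    return ce - beta * ent
```

With uniform logits the entropy bonus exactly offsets the cross-entropy of a two-class uniform prediction, so the regularizer rewards keeping probability mass spread out.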

no code implementations • 12 Feb 2020 • Hao Wang, Hsiang Hsu, Mario Diaz, Flavio P. Calmon

To evaluate the effect of disparate treatment, we compare the performance of split classifiers (i.e., classifiers trained and deployed separately on each group) with group-blind classifiers (i.e., classifiers which do not use a sensitive attribute).
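The split-versus-group-blind comparison can be illustrated with a toy 1-D threshold learner; the data, group names, and learner are all hypothetical:

```python
def fit_threshold(scores, labels):
    """Pick the 1-D threshold minimizing training error (predict 1 if score >= t)."""
    best_t, best_err = None, float("inf")
    for t in sorted(set(scores)):
        err = sum((s >= t) != y for s, y in zip(scores, labels))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Hypothetical data: two groups with shifted score distributions.
group_a = ([0.2, 0.4, 0.6, 0.8], [0, 0, 1, 1])
group_b = ([0.5, 0.7, 0.9, 1.1], [0, 0, 1, 1])

# Split classifiers: one threshold per group (disparate treatment).
t_a = fit_threshold(*group_a)
t_b = fit_threshold(*group_b)

# Group-blind classifier: a single threshold fit on the pooled data.
t_blind = fit_threshold(group_a[0] + group_b[0], group_a[1] + group_b[1])
```

Here each split classifier separates its group perfectly, while any single group-blind threshold must misclassify someone, which is the kind of performance gap the comparison above formalizes.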

1 code implementation • 4 Feb 2020 • Sohrab Ferdowsi, Behrooz Razeghi, Taras Holotyak, Flavio P. Calmon, Slava Voloshynovskiy

We propose a practical framework to address the problem of privacy-aware image sharing in large-scale setups.

no code implementations • 17 Jan 2020 • Shahab Asoodeh, Mario Diaz, Flavio P. Calmon

We investigate the framework of privacy amplification by iteration, recently proposed by Feldman et al., through an information-theoretic lens.

no code implementations • 16 Jan 2020 • Shahab Asoodeh, Jiachun Liao, Flavio P. Calmon, Oliver Kosut, Lalitha Sankar

We derive the optimal differential privacy (DP) parameters of a mechanism that satisfies a given level of Rényi differential privacy (RDP).
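For orientation, the standard (generally loose) RDP-to-DP conversion reads as follows; the paper's contribution is the optimal version of this map, which this baseline does not reproduce:

```python
import math

def rdp_to_dp(alpha, rdp_eps, delta):
    """Standard RDP-to-DP conversion: an (alpha, rdp_eps)-RDP mechanism
    satisfies (rdp_eps + log(1/delta)/(alpha - 1), delta)-DP. Included
    only as a baseline bound, not the optimal parameters derived above."""
    return rdp_eps + math.log(1.0 / delta) / (alpha - 1.0)
```

The gap between this baseline and the optimal DP parameters is exactly what motivates deriving the conversion exactly rather than via the generic bound.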

2 code implementations • 21 Feb 2019 • Hsiang Hsu, Salman Salamatian, Flavio P. Calmon

Correspondence analysis (CA) is a multivariate statistical tool used to visualize and interpret data dependencies.

1 code implementation • 29 Jan 2019 • Hao Wang, Berk Ustun, Flavio P. Calmon

When the performance of a machine learning model varies over groups defined by sensitive attributes (e.g., gender or ethnicity), the performance disparity can be expressed in terms of the probability distributions of the input and output variables over each group.

no code implementations • 29 Nov 2018 • Hsiang Hsu, Flavio P. Calmon, José Cândido Silveira Santos Filho, Andre P. Calmon, Salman Salamatian

We analyze expenditure patterns of discretionary funds by Brazilian congress members.

no code implementations • 21 Jun 2018 • Hsiang Hsu, Salman Salamatian, Flavio P. Calmon

In this paper, we provide a novel interpretation of CA in terms of an information-theoretic quantity called the principal inertia components.

no code implementations • 16 Feb 2018 • Hsiang Hsu, Shahab Asoodeh, Salman Salamatian, Flavio P. Calmon

Given a pair of random variables $(X, Y)\sim P_{XY}$ and two convex functions $f_1$ and $f_2$, we introduce two bottleneck functionals as the lower and upper boundaries of the two-dimensional convex set that consists of the pairs $\left(I_{f_1}(W; X), I_{f_2}(W; Y)\right)$, where $I_f$ denotes $f$-information and $W$ varies over the set of all discrete random variables satisfying the Markov condition $W \to X \to Y$.

no code implementations • 16 Jan 2018 • Hao Wang, Berk Ustun, Flavio P. Calmon

In the context of machine learning, disparate impact refers to a form of systematic discrimination whereby the output distribution of a model depends on the value of a sensitive attribute (e.g., race or gender).

no code implementations • 2 Oct 2017 • Hao Wang, Lisa Vo, Flavio P. Calmon, Muriel Médard, Ken R. Duffy, Mayank Varia

Here, an analyst is allowed to reconstruct (in a mean-squared error sense) certain functions of the data (utility), while other private functions should not be reconstructed with distortion below a certain threshold (privacy).

1 code implementation • 11 Apr 2017 • Flavio P. Calmon, Dennis Wei, Karthikeyan Natesan Ramamurthy, Kush R. Varshney

Non-discrimination is a recognized objective in algorithmic decision making.

Papers With Code is a free resource with all data licensed under CC-BY-SA.