Search Results for author: Cynthia Dwork

Found 21 papers, 5 papers with code

HappyMap: A Generalized Multi-calibration Method

no code implementations 8 Mar 2023 Zhun Deng, Cynthia Dwork, Linjun Zhang

Fairness is captured by incorporating demographic subgroups into the class of functions~$\mathcal{C}$.

Conformal Prediction Fairness +1
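Multi-calibration asks that a predictor be calibrated not only overall but within every demographic subgroup in the class $\mathcal{C}$. A minimal sketch of *checking* subgroup calibration on synthetic data (hypothetical data and subgroups; this is not the paper's HappyMap algorithm):

```python
import numpy as np

def subgroup_calibration_error(preds, labels, mask, n_bins=10):
    """Weighted mean of |avg prediction - avg outcome| over prediction bins,
    restricted to the subgroup selected by `mask`."""
    p, y = preds[mask], labels[mask]
    err = 0.0
    for b in range(n_bins):
        in_bin = (p >= b / n_bins) & (p < (b + 1) / n_bins)
        if in_bin.sum() == 0:
            continue
        err += in_bin.mean() * abs(p[in_bin].mean() - y[in_bin].mean())
    return err

rng = np.random.default_rng(0)
preds = rng.uniform(size=1000)
labels = (rng.uniform(size=1000) < preds).astype(float)  # calibrated by construction
group = rng.integers(0, 2, size=1000).astype(bool)       # a hypothetical subgroup
print(subgroup_calibration_error(preds, labels, group))
```

A multi-calibrated predictor keeps this error small simultaneously for every subgroup in $\mathcal{C}$, not just for the group checked here.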

From Pseudorandomness to Multi-Group Fairness and Back

no code implementations 21 Jan 2023 Cynthia Dwork, Daniel Lee, Huijia Lin, Pranay Tankala

We identify and explore connections between the recent literature on multi-group fairness for prediction algorithms and the pseudorandomness notions of leakage-resilience and graph regularity.

Fairness LEMMA

Confidence-Ranked Reconstruction of Census Microdata from Published Statistics

1 code implementation 6 Nov 2022 Travis Dick, Cynthia Dwork, Michael Kearns, Terrance Liu, Aaron Roth, Giuseppe Vietri, Zhiwei Steven Wu

Our attacks significantly outperform those that are based only on access to a public distribution or population from which the private dataset $D$ was sampled, demonstrating that they are exploiting information in the aggregate statistics $Q(D)$, and not simply the overall structure of the distribution.

Reconstruction Attack
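The core idea of a confidence-ranked reconstruction attack: generate candidate records and rank them by how consistent they are with the published aggregate statistics $Q(D)$. A toy sketch on 4-bit records with published one-way marginals (hypothetical scoring rule, far simpler than the paper's attack):

```python
import itertools
import numpy as np

# Private dataset of 4-bit records and its published one-way marginals Q(D).
D = np.array([[1, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]])
marginals = D.mean(axis=0)

def score(candidate):
    """Lower = more consistent with the published statistics (toy L1 score)."""
    return np.abs(np.array(candidate) - marginals).sum()

# Rank all candidate records by consistency with Q(D).
candidates = sorted(itertools.product([0, 1], repeat=4), key=score)
print(candidates[0])  # top-ranked reconstruction guess: (1, 0, 1, 1)
```

Here the top-ranked candidate is a record that actually occurs in $D$, illustrating how aggregate statistics alone can leak individual rows; the paper's attack scales this ranking idea to census-style statistics.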

Improved Generalization Guarantees in Restricted Data Models

no code implementations 20 Jul 2022 Elbert Du, Cynthia Dwork

Differential privacy is known to protect against threats to validity incurred due to adaptive, or exploratory, data analysis -- even when the analyst adversarially searches for a statistical estimate that diverges from the true value of the quantity of interest on the underlying population.

Scaffolding Sets

no code implementations 4 Nov 2021 Maya Burhanpurkar, Zhun Deng, Cynthia Dwork, Linjun Zhang

Predictors map individual instances in a population to the interval $[0, 1]$.

Outcome Indistinguishability

no code implementations 26 Nov 2020 Cynthia Dwork, Michael P. Kim, Omer Reingold, Guy N. Rothblum, Gal Yona

Prediction algorithms assign numbers to individuals that are popularly understood as individual "probabilities" -- what is the probability of 5-year survival after cancer diagnosis?

Interpreting Robust Optimization via Adversarial Influence Functions

no code implementations ICML 2020 Zhun Deng, Cynthia Dwork, Jialiang Wang, Linjun Zhang

Robust optimization is widely used in modern data science, especially in adversarial training.

Private Post-GAN Boosting

1 code implementation ICLR 2021 Marcel Neunhoeffer, Zhiwei Steven Wu, Cynthia Dwork

We also provide a non-private variant of PGB that improves the data quality of standard GAN training.

Representation via Representations: Domain Generalization via Adversarially Learned Invariant Representations

no code implementations 20 Jun 2020 Zhun Deng, Frances Ding, Cynthia Dwork, Rachel Hong, Giovanni Parmigiani, Prasad Patil, Pragya Sur

We study an adversarial loss function for $k$ domains and precisely characterize its limiting behavior as $k$ grows, formalizing and proving the intuition, backed by experiments, that observing data from a larger number of domains helps.

Domain Generalization Fairness

Individual Fairness in Pipelines

no code implementations 12 Apr 2020 Cynthia Dwork, Christina Ilvento, Meena Jagadeesan

It is well understood that a system built from individually fair components may not itself be individually fair.

Fairness General Classification

Abstracting Fairness: Oracles, Metrics, and Interpretability

no code implementations 4 Apr 2020 Cynthia Dwork, Christina Ilvento, Guy N. Rothblum, Pragya Sur

Our principal conceptual result is an extraction procedure that learns the underlying truth; moreover, the procedure can learn an approximation to this truth given access to a weak form of the oracle.

Fairness General Classification

Architecture Selection via the Trade-off Between Accuracy and Robustness

no code implementations 4 Jun 2019 Zhun Deng, Cynthia Dwork, Jialiang Wang, Yao Zhao

We provide a general framework for characterizing the trade-off between accuracy and robustness in supervised learning.

Adversarial Attack

Differentially Private False Discovery Rate Control

no code implementations 11 Jul 2018 Cynthia Dwork, Weijie J. Su, Li Zhang

Differential privacy provides a rigorous framework for privacy-preserving data analysis.

Privacy Preserving Two-sample testing
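For context, the non-private baseline that this line of work makes differentially private is the Benjamini-Hochberg step-up procedure: sort the p-values and reject the $k$ smallest, where $k$ is the largest rank with $p_{(k)} \le k\alpha/m$. A plain-Python sketch of that baseline (not the paper's private variant):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return indices of hypotheses rejected by the BH step-up procedure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    # Largest rank k whose sorted p-value clears the step-up threshold.
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k = rank
    return sorted(order[:k])

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]))
# → [0, 1]
```

The private version must run a procedure like this on noisy p-value statistics while still controlling the false discovery rate.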

Fairness Under Composition

no code implementations 15 Jun 2018 Cynthia Dwork, Christina Ilvento

Algorithmic fairness, and in particular the fairness of scoring and classification algorithms, has become a topic of increasing social concern and has recently witnessed an explosion of research in theoretical computer science, machine learning, statistics, the social sciences, and law.

Fairness

Privacy-preserving Prediction

no code implementations 27 Mar 2018 Cynthia Dwork, Vitaly Feldman

We demonstrate that this overhead can be avoided for the well-studied class of thresholds on a line and for a number of standard settings of convex regression.

PAC learning Privacy Preserving +1

Decoupled classifiers for fair and efficient machine learning

no code implementations 20 Jul 2017 Cynthia Dwork, Nicole Immorlica, Adam Tauman Kalai, Max Leiserson

When it is ethical and legal to use a sensitive attribute (such as gender or race) in machine learning systems, the question remains how to do so.

Attribute BIG-bench Machine Learning +2
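The decoupling idea: when using the sensitive attribute is permitted, fit a separate classifier per group rather than one pooled model, which can raise accuracy for every group. A toy sketch with simple nearest-centroid classifiers on synthetic data (illustrative only, not the paper's joint-loss framework):

```python
import numpy as np

def fit_centroids(X, y):
    """Nearest-class-centroid classifier: store the mean feature vector per label."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    labels = list(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array(labels)[d.argmin(axis=0)]

rng = np.random.default_rng(1)
# Two groups whose positive class lives in different regions of feature space.
Xa = rng.normal(size=(200, 2)); ya = (Xa[:, 0] > 0).astype(int)
Xb = rng.normal(size=(200, 2)); yb = (Xb[:, 1] > 0).astype(int)

pooled = fit_centroids(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))
per_group = {"a": fit_centroids(Xa, ya), "b": fit_centroids(Xb, yb)}

acc_pooled = ((predict(pooled, Xa) == ya).mean()
              + (predict(pooled, Xb) == yb).mean()) / 2
acc_decoupled = ((predict(per_group["a"], Xa) == ya).mean()
                 + (predict(per_group["b"], Xb) == yb).mean()) / 2
print(acc_pooled, acc_decoupled)
```

Because the pooled model must compromise between the two groups' decision boundaries, the decoupled classifiers win on both; the paper studies when and how such decoupling is justified.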

Preserving Statistical Validity in Adaptive Data Analysis

no code implementations 10 Nov 2014 Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, Aaron Roth

We show that, surprisingly, there is a way to estimate an exponential in $n$ number of expectations accurately even if the functions are chosen adaptively.

Two-sample testing
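One mechanism behind such guarantees: answer each statistical query with its empirical mean plus noise calibrated to the query's sensitivity, so that even an analyst who chooses the next query based on earlier answers cannot overfit the sample. A minimal Laplace-mechanism sketch (illustrative noise scale and privacy parameter, not the paper's exact construction):

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.uniform(size=500)  # the private sample
eps = 0.5                     # illustrative per-query privacy parameter

def answer(query):
    """Empirical mean of a [0,1]-valued query plus Laplace noise of scale 1/(n*eps)."""
    exact = np.mean([query(x) for x in data])
    return exact + rng.laplace(scale=1.0 / (len(data) * eps))

# The analyst picks the second query adaptively, after seeing the first answer;
# the noisy answers nevertheless stay close to the population values.
a1 = answer(lambda x: x)              # mean of the data
a2 = answer(lambda x: float(x > a1))  # adaptively chosen threshold query
print(a1, a2)
```

The connection to the abstract: differential privacy's composition guarantees let such a mechanism answer very many (here, exponentially many in $n$) adaptively chosen queries while preserving statistical validity.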

Analyze Gauss: Optimal Bounds for Privacy-Preserving Principal Component Analysis

1 code implementation 1 May 2014 Cynthia Dwork, Kunal Talwar, Abhradeep Thakurta, Li Zhang

We show that the well-known, but misnamed, randomized response algorithm, with properly tuned parameters, provides a nearly optimal additive quality gap compared to the best possible singular subspace of A.

Attribute Privacy Preserving
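The mechanism referred to here adds symmetric Gaussian noise to the covariance matrix $A^TA$ and then takes the top singular subspace of the noisy matrix. A minimal sketch with an arbitrary noise scale (calibrating the scale to $(\varepsilon, \delta)$ and the row norms of $A$ is the paper's contribution):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(1000, 5))
A[:, 0] *= 5.0                 # one dominant direction in the data
cov = A.T @ A

# Symmetrized Gaussian noise; the proper scale depends on (eps, delta).
sigma = 10.0
noise = rng.normal(scale=sigma, size=cov.shape)
noisy_cov = cov + (noise + noise.T) / 2

# Top eigenvector of the noisy covariance approximates the true top direction.
true_top = np.linalg.eigh(cov)[1][:, -1]    # eigh: eigenvalues ascending
noisy_top = np.linalg.eigh(noisy_cov)[1][:, -1]
print(abs(true_top @ noisy_top))            # alignment close to 1
```

When the eigengap of $A^TA$ dwarfs the noise, the noisy top subspace is nearly identical to the true one, which is the sense in which the mechanism's quality gap can be nearly optimal.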

Learning Fair Representations

2 code implementations International Conference on Machine Learning 2013 Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, Cynthia Dwork

We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole), and individual fairness (similar individuals should be treated similarly).

Classification Fairness +1
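The two fairness notions in this abstract can each be measured directly. A toy sketch evaluating a hypothetical group-blind predictor (illustrative metrics only, not the paper's learned-representation algorithm): the statistical-parity gap for group fairness, and the worst-case Lipschitz ratio for individual fairness.

```python
import numpy as np

def statistical_parity_gap(preds, group):
    """Group fairness: gap between the groups' average positive scores."""
    return abs(preds[group].mean() - preds[~group].mean())

def individual_fairness_lipschitz(preds, X):
    """Individual fairness: max |f(x) - f(x')| / d(x, x') over all pairs."""
    worst = 0.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d = np.linalg.norm(X[i] - X[j])
            if d > 0:
                worst = max(worst, abs(preds[i] - preds[j]) / d)
    return worst

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 3))
group = rng.integers(0, 2, size=100).astype(bool)
preds = 1 / (1 + np.exp(-X @ np.array([1.0, 0.5, 0.0])))  # group-blind sigmoid scores
print(statistical_parity_gap(preds, group),
      individual_fairness_lipschitz(preds, X))
```

The paper's algorithm learns an intermediate representation that keeps both quantities small while preserving accuracy, rather than merely auditing a fixed predictor as this sketch does.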
