no code implementations • ICML 2020 • Giuseppe Vietri, Borja de Balle Pigem, Steven Wu, Akshay Krishnamurthy
Motivated by high-stakes decision-making domains like personalized medicine, where user information is inherently sensitive, we design privacy-preserving exploration policies for episodic reinforcement learning (RL).
1 code implementation • 5 Jun 2023 • Terrance Liu, Jingwu Tang, Giuseppe Vietri, Zhiwei Steven Wu
We study the problem of efficiently generating differentially private synthetic data that approximate the statistical properties of an underlying sensitive dataset.
1 code implementation • 6 Nov 2022 • Travis Dick, Cynthia Dwork, Michael Kearns, Terrance Liu, Aaron Roth, Giuseppe Vietri, Zhiwei Steven Wu
Our attacks significantly outperform those that are based only on access to a public distribution or population from which the private dataset $D$ was sampled, demonstrating that they are exploiting information in the aggregate statistics $Q(D)$, and not simply the overall structure of the distribution.
no code implementations • 15 Sep 2022 • Giuseppe Vietri, Cedric Archambeau, Sergul Aydore, William Brown, Michael Kearns, Aaron Roth, Ankit Siva, Shuai Tang, Zhiwei Steven Wu
A key innovation in our algorithm is the ability to directly handle numerical features, in contrast to a number of related prior approaches which require numerical features to be first converted into high-cardinality categorical features via a binning strategy.
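A minimal sketch of the binning preprocessing that the prior approaches mentioned above rely on (the feature values and bin count here are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical example: discretizing a numerical feature (e.g., age)
# into categorical bins before query release.
ages = np.array([23, 45, 67, 34, 89, 12, 56])

# 8 equal-width bins over the feature's range; finer bins yield
# higher-cardinality categorical features and a larger query class.
bin_edges = np.linspace(ages.min(), ages.max(), num=9)
age_bins = np.digitize(ages, bin_edges[1:-1])  # bin index per record
```

Directly handling numerical features avoids both the information loss of coarse bins and the cardinality blow-up of fine ones.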
no code implementations • 2 Feb 2022 • Dung Daniel Ngo, Giuseppe Vietri, Zhiwei Steven Wu
We study privacy-preserving exploration in sequential decision-making for environments that rely on sensitive data such as medical records.
1 code implementation • NeurIPS 2021 • Terrance Liu, Giuseppe Vietri, Zhiwei Steven Wu
We study private synthetic data generation for query release, where the goal is to construct a sanitized version of a sensitive dataset, subject to differential privacy, that approximately preserves the answers to a large collection of statistical queries.
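As a concrete (purely illustrative, not the paper's algorithm) example of what "approximately preserves the answers to a large collection of statistical queries" means, consider all 2-way marginal counting queries over a binary dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.integers(0, 2, size=(1000, 3))       # sensitive binary dataset
D_syn = rng.integers(0, 2, size=(1000, 3))   # stand-in synthetic dataset

def two_way_marginals(data):
    """Answers to all pairwise marginal queries (i, j, a, b):
    the fraction of rows with data[:, i] == a and data[:, j] == b."""
    n, d = data.shape
    answers = []
    for i in range(d):
        for j in range(i + 1, d):
            for a in (0, 1):
                for b in (0, 1):
                    answers.append(
                        np.mean((data[:, i] == a) & (data[:, j] == b))
                    )
    return np.array(answers)

# Query release aims to make this worst-case error small while the
# synthetic data is generated under differential privacy.
max_error = np.max(np.abs(two_way_marginals(D) - two_way_marginals(D_syn)))
```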
1 code implementation • 17 Feb 2021 • Terrance Liu, Giuseppe Vietri, Thomas Steinke, Jonathan Ullman, Zhiwei Steven Wu
In many statistical problems, incorporating priors can significantly improve performance.
no code implementations • 23 Sep 2020 • Farzana Beente Yusuf, Vitalii Stebliankin, Giuseppe Vietri, Giri Narasimhan
We derive an optimal learning rate for EXP4-DFDC that defines the balance between exploration and exploitation, and we prove theoretically that the expected regret of our algorithm vanishes as a function of time.
no code implementations • 18 Sep 2020 • Giuseppe Vietri, Borja Balle, Akshay Krishnamurthy, Zhiwei Steven Wu
Motivated by high-stakes decision-making domains like personalized medicine, where user information is inherently sensitive, we design privacy-preserving exploration policies for episodic reinforcement learning (RL).
1 code implementation • ICML 2020 • Giuseppe Vietri, Grace Tian, Mark Bun, Thomas Steinke, Zhiwei Steven Wu
We present three new algorithms for constructing differentially private synthetic data: a sanitized version of a sensitive dataset that approximately preserves the answers to a large collection of statistical queries.
1 code implementation • ICML 2020 • Seth Neel, Aaron Roth, Giuseppe Vietri, Zhiwei Steven Wu
We find that for the problem of learning linear classifiers, directly optimizing for 0/1 loss using our approach can outperform the more standard approach of privately optimizing a convex-surrogate loss function on the Adult dataset.
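To illustrate the distinction the entry above draws (this sketch is not the paper's private algorithm), here is the non-convex 0/1 loss next to the hinge loss, a standard convex surrogate that upper-bounds it pointwise:

```python
import numpy as np

def zero_one_loss(w, X, y):
    # Fraction of misclassified points; non-convex and piecewise
    # constant in w, so gradient methods cannot optimize it directly.
    return np.mean(np.sign(X @ w) != y)

def hinge_loss(w, X, y):
    # Convex surrogate: upper-bounds the 0/1 loss at every point.
    return np.mean(np.maximum(0.0, 1.0 - y * (X @ w)))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)  # linearly separable labels
w = np.array([1.0, 1.0])                    # the separating direction here
```

Private convex optimization targets the surrogate; the approach above instead optimizes the 0/1 objective itself, which the surrogate only approximates.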