no code implementations • 24 Nov 2023 • Xinwei Zhang, Zhiqi Bu, Zhiwei Steven Wu, Mingyi Hong
In our work, we propose a new error-feedback (EF) DP algorithm as an alternative to DPSGD-GC, which not only offers a diminishing utility bound without inducing a constant clipping bias but, more importantly, allows for an arbitrary choice of clipping threshold that is independent of the problem.
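For context, the core privatization step of standard DP-SGD with gradient clipping (the DPSGD-GC baseline the abstract refers to) can be sketched as below. This is an illustrative sketch only, not the paper's error-feedback variant; the function name, the threshold `clip_C`, and the noise scaling are assumptions for the example.

```python
import numpy as np

def clipped_noisy_mean(per_sample_grads, clip_C, noise_mult, rng=None):
    """Clip each per-sample gradient to L2 norm <= clip_C, average,
    and add Gaussian noise calibrated to the clipping threshold."""
    rng = np.random.default_rng(rng)
    g = np.asarray(per_sample_grads, dtype=float)
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_C / np.maximum(norms, 1e-12))
    clipped = g * scale                    # each row now has norm <= clip_C
    mean = clipped.mean(axis=0)
    sigma = noise_mult * clip_C / len(g)   # noise std for the averaged gradient
    return mean + rng.normal(0.0, sigma, size=mean.shape)
```

When the threshold is smaller than typical gradient norms, the clipped average is biased toward zero; the paper's error-feedback approach is motivated by removing exactly this constant clipping bias.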
1 code implementation • 5 Jun 2023 • Terrance Liu, Jingwu Tang, Giuseppe Vietri, Zhiwei Steven Wu
We study the problem of efficiently generating differentially private synthetic data that approximate the statistical properties of an underlying sensitive dataset.
1 code implementation • 26 Mar 2023 • Gokul Swamy, Sanjiban Choudhury, J. Andrew Bagnell, Zhiwei Steven Wu
In this work, we demonstrate for the first time a more informed imitation learning reduction where we utilize the state distribution of the expert to alleviate the global exploration component of the RL subroutine, providing an exponential speedup in theory.
2 code implementations • 6 Mar 2023 • Shuai Tang, Sergul Aydore, Michael Kearns, Saeyoung Rho, Aaron Roth, Yichen Wang, Yu-Xiang Wang, Zhiwei Steven Wu
We revisit the problem of differentially private squared error linear regression.
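A common baseline for this problem is sufficient-statistics perturbation: privatize $X^\top X$ and $X^\top y$ once and solve the noisy normal equations. The sketch below illustrates that baseline (in the spirit of SSP/AdaSSP), not necessarily the estimator studied in this paper; the noise scales assume rows pre-clipped to norm at most `clip` and are illustrative.

```python
import numpy as np

def dp_linreg_ssp(X, y, eps, clip=1.0, rng=None):
    """DP linear regression via sufficient-statistics perturbation:
    add symmetric Laplace noise to X^T X and X^T y, then solve.
    Noise scales are illustrative; rows assumed clipped to norm <= clip."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    A = X.T @ X
    b = X.T @ y
    E = rng.laplace(0.0, 2 * clip**2 / eps, size=(d, d))
    E = (E + E.T) / 2                    # keep the noisy Gram matrix symmetric
    A_noisy = A + E + np.eye(d) * 1e-3   # small ridge for numerical invertibility
    b_noisy = b + rng.laplace(0.0, 2 * clip / eps, size=d)
    return np.linalg.solve(A_noisy, b_noisy)
```

A useful property of this style of estimator is that the privacy cost is paid once for the sufficient statistics, independent of how many times the solution is recomputed.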
no code implementations • 2 Mar 2023 • Xin Gu, Gautam Kamath, Zhiwei Steven Wu
We give an algorithm for selecting a public dataset by measuring a low-dimensional subspace distance between gradients of the public and private examples.
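One standard way to measure such a distance is via the projection metric between the top-$k$ principal subspaces spanned by the two gradient sets. The sketch below illustrates this generic measure; the paper's exact distance may differ, and the function name and normalization are assumptions.

```python
import numpy as np

def subspace_distance(G_pub, G_priv, k):
    """Projection-metric distance between the top-k principal subspaces
    spanned by two sets of gradients (one gradient per row)."""
    def top_k_basis(G):
        # Right singular vectors span the row (gradient) subspace.
        _, _, Vt = np.linalg.svd(np.asarray(G, float), full_matrices=False)
        return Vt[:k].T                  # d x k orthonormal basis
    U, V = top_k_basis(G_pub), top_k_basis(G_priv)
    P, Q = U @ U.T, V @ V.T              # projection operators
    # ||P - Q||_F / sqrt(2) is 0 for identical subspaces, sqrt(k) for orthogonal ones
    return np.linalg.norm(P - Q) / np.sqrt(2)
```

Working with projection operators sidesteps the sign/rotation ambiguity of individual singular vectors, so the distance depends only on the subspaces themselves.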
1 code implementation • 22 Feb 2023 • Luke Guerdan, Amanda Coston, Kenneth Holstein, Zhiwei Steven Wu
We also develop a method for estimating treatment-dependent measurement error parameters when these are unknown in advance.
no code implementations • 16 Feb 2023 • Shengyuan Hu, Dung Daniel Ngo, Shuran Zheng, Virginia Smith, Zhiwei Steven Wu
Federated Learning (FL) aims to foster collaboration among a population of clients to improve the accuracy of machine learning without directly sharing local data.
no code implementations • 13 Feb 2023 • Luke Guerdan, Amanda Coston, Zhiwei Steven Wu, Kenneth Holstein
In this paper, we identify five sources of target variable bias that can impact the validity of proxy labels in human-AI decision-making tasks.
no code implementations • 25 Nov 2022 • Keegan Harris, Anish Agarwal, Chara Podimata, Zhiwei Steven Wu
Unlike this classical setting, we permit the units generating the panel data to be strategic, i.e., units may modify their pre-intervention outcomes in order to receive a more desirable intervention.
no code implementations • 8 Nov 2022 • Zhun Deng, He Sun, Zhiwei Steven Wu, Linjun Zhang, David C. Parkes
AI methods are used in societally important settings, ranging from credit to employment to housing, and it is crucial to ensure fairness in algorithmic decision making.
1 code implementation • 6 Nov 2022 • Travis Dick, Cynthia Dwork, Michael Kearns, Terrance Liu, Aaron Roth, Giuseppe Vietri, Zhiwei Steven Wu
Our attacks significantly outperform those that are based only on access to a public distribution or population from which the private dataset $D$ was sampled, demonstrating that they are exploiting information in the aggregate statistics $Q(D)$, and not simply the overall structure of the distribution.
no code implementations • 15 Sep 2022 • Giuseppe Vietri, Cedric Archambeau, Sergul Aydore, William Brown, Michael Kearns, Aaron Roth, Ankit Siva, Shuai Tang, Zhiwei Steven Wu
A key innovation in our algorithm is the ability to directly handle numerical features, in contrast to a number of related prior approaches, which require numerical features to be first converted into high-cardinality categorical features via a binning strategy.
no code implementations • 19 Aug 2022 • Gokul Swamy, Sanjiban Choudhury, J. Andrew Bagnell, Zhiwei Steven Wu
A variety of problems in econometrics and machine learning, including instrumental variable regression and Bellman residual minimization, can be formulated as satisfying a set of conditional moment restrictions (CMR).
1 code implementation • 3 Aug 2022 • Gokul Swamy, Sanjiban Choudhury, J. Andrew Bagnell, Zhiwei Steven Wu
We consider imitation learning problems where the learner's ability to mimic the expert increases throughout the course of an episode as more information is revealed.
1 code implementation • 16 Jun 2022 • Ziyu Liu, Shengyuan Hu, Zhiwei Steven Wu, Virginia Smith
While the application of differential privacy (DP) has been well-studied in cross-device federated learning (FL), there is a lack of work considering DP and its implications for cross-silo FL, a setting characterized by a limited number of clients each containing many data subjects.
no code implementations • 15 Jun 2022 • Justin Whitehouse, Zhiwei Steven Wu, Aaditya Ramdas, Ryan Rogers
In this work, we generalize noise reduction to the setting of Gaussian noise, introducing the Brownian mechanism.
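The noise-reduction idea can be illustrated with a Brownian motion: release $f(x) + B(t)$ at a large time $t$, then move to smaller times (less noise) while all releases remain mutually consistent, since they come from one sample path. The sketch below shows only the consistent-noise-path part, not the paper's privacy accounting; names are assumptions.

```python
import numpy as np

def brownian_noise_path(times, rng=None):
    """Sample one Brownian motion path at the given increasing times,
    so that f(x) + B(t) can be released at progressively smaller t
    (progressively less noise) without contradicting earlier releases."""
    rng = np.random.default_rng(rng)
    t = np.asarray(times, float)
    # Independent Gaussian increments with variance equal to the time gaps.
    incs = rng.normal(0.0, np.sqrt(np.diff(np.concatenate([[0.0], t]))))
    return np.cumsum(incs)               # B(t_1), ..., B(t_k)
```

Because $\operatorname{Var}(B(t)) = t$, stepping back from time $t_2$ to $t_1 < t_2$ strictly reduces the noise while reusing the same underlying randomness.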
no code implementations • 13 Jun 2022 • Terrance Liu, Zhiwei Steven Wu
Moreover, it has not yet been established how one can generate synthetic data at both the group and individual level while capturing such statistics.
no code implementations • 1 Jun 2022 • Xinyan Hu, Dung Daniel Ngo, Aleksandrs Slivkins, Zhiwei Steven Wu
The users are free to choose other actions and need to be incentivized to follow the algorithm's recommendations.
1 code implementation • 30 May 2022 • Gokul Swamy, Nived Rajaraman, Matthew Peng, Sanjiban Choudhury, J. Andrew Bagnell, Zhiwei Steven Wu, Jiantao Jiao, Kannan Ramchandran
In the tabular setting or with linear function approximation, our meta theorem shows that the performance gap incurred by our approach achieves the optimal $\widetilde{O}\left( \min\left( H^{3/2}/N,\, H/\sqrt{N} \right) \right)$ dependency, under significantly weaker assumptions compared to prior work.
no code implementations • 27 May 2022 • Maria-Florina Balcan, Keegan Harris, Mikhail Khodak, Zhiwei Steven Wu
We study online learning with bandit feedback across multiple tasks, with the goal of improving average performance across tasks if they are similar according to some natural task-similarity measure.
no code implementations • 18 May 2022 • Logan Stapleton, Min Hun Lee, Diana Qing, Marya Wright, Alexandra Chouldechova, Kenneth Holstein, Zhiwei Steven Wu, Haiyi Zhu
In this work, we conducted a set of seven design workshops with 35 stakeholders who have been impacted by, or work within, the child welfare system, to understand their beliefs and concerns around PRMs and to engage them in imagining new uses of data and technologies in the child welfare system.
no code implementations • 13 May 2022 • Wesley Hanwen Deng, Manish Nagireddy, Michelle Seng Ah Lee, Jatinder Singh, Zhiwei Steven Wu, Kenneth Holstein, Haiyi Zhu
Recent years have seen the development of many open-source ML fairness toolkits aimed at helping ML practitioners assess and address unfairness in their systems.
no code implementations • 5 Apr 2022 • Anna Kawakami, Venkatesh Sivaraman, Hao-Fei Cheng, Logan Stapleton, Yanghuidi Cheng, Diana Qing, Adam Perer, Zhiwei Steven Wu, Haiyi Zhu, Kenneth Holstein
AI-based decision support tools (ADS) are increasingly used to augment human decision-making in high-stakes, social contexts.
no code implementations • 18 Mar 2022 • Shengyuan Hu, Zhiwei Steven Wu, Virginia Smith
In particular, we explore and extend the notion of Bounded Group Loss as a theoretically-grounded approach for group fairness.
no code implementations • 10 Mar 2022 • Justin Whitehouse, Aaditya Ramdas, Ryan Rogers, Zhiwei Steven Wu
However, these results require that the privacy parameters of all algorithms be fixed before interacting with the data.
1 code implementation • 17 Feb 2022 • Ian Waudby-Smith, Zhiwei Steven Wu, Aaditya Ramdas
This work derives methods for performing nonparametric, nonasymptotic statistical inference for population means under the constraint of local differential privacy (LDP).
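The textbook starting point for private mean estimation under LDP is for each user to release only a locally noised value, which the aggregator then averages. The sketch below shows that baseline with per-user Laplace noise; the paper's nonparametric, nonasymptotic machinery (confidence sequences, etc.) goes well beyond it, and the function name and interface are assumptions.

```python
import numpy as np

def ldp_mean(values, eps, lo=0.0, hi=1.0, rng=None):
    """Each user clips their value to [lo, hi] and adds Laplace noise with
    scale (hi - lo)/eps before reporting, giving eps-LDP per user; the
    aggregator averages the noisy reports (unbiased for the clipped mean)."""
    rng = np.random.default_rng(rng)
    x = np.clip(np.asarray(values, float), lo, hi)
    noisy = x + rng.laplace(0.0, (hi - lo) / eps, size=x.shape)
    return noisy.mean()
```

Since the noise is added before any data leaves a user's device, the aggregator never observes a raw value, which is the defining feature of the local model.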
1 code implementation • 10 Feb 2022 • Alberto Bietti, Chen-Yu Wei, Miroslav Dudík, John Langford, Zhiwei Steven Wu
Large-scale machine learning systems often involve data distributed across a collection of users.
no code implementations • 2 Feb 2022 • Dung Daniel Ngo, Giuseppe Vietri, Zhiwei Steven Wu
We study privacy-preserving exploration in sequential decision-making for environments that rely on sensitive data such as medical records.
1 code implementation • 2 Feb 2022 • Gokul Swamy, Sanjiban Choudhury, J. Andrew Bagnell, Zhiwei Steven Wu
We develop algorithms for imitation learning from policy data that was corrupted by temporally correlated noise in expert actions.
1 code implementation • 28 Jan 2022 • Zuxin Liu, Zhepeng Cen, Vladislav Isenbaev, Wei Liu, Zhiwei Steven Wu, Bo Li, Ding Zhao
Safe reinforcement learning (RL) aims to learn policies that satisfy certain constraints before deploying them to safety-critical applications.
no code implementations • 12 Dec 2021 • Keegan Harris, Valerie Chen, Joon Sik Kim, Ameet Talwalkar, Hoda Heidari, Zhiwei Steven Wu
While the decision maker's problem of finding the optimal Bayesian incentive-compatible (BIC) signaling policy takes the form of optimization over infinitely-many variables, we show that this optimization can be cast as a linear program over finitely-many regions of the space of possible assessment rules.
no code implementations • 5 Oct 2021 • Gokul Swamy, Sanjiban Choudhury, J. Andrew Bagnell, Zhiwei Steven Wu
Recent work by Jarrett et al. attempts to frame the problem of offline imitation learning (IL) as one of learning a joint energy-based model, with the hope of outperforming standard behavioral cloning.
1 code implementation • 30 Aug 2021 • Shengyuan Hu, Zhiwei Steven Wu, Virginia Smith
Many problems in machine learning rely on multi-task learning (MTL), in which the goal is to solve multiple related machine learning tasks simultaneously.
1 code implementation • 21 Jul 2021 • Daniel Ngo, Logan Stapleton, Vasilis Syrgkanis, Zhiwei Steven Wu
In rounds, a social planner interacts with a sequence of heterogeneous agents who arrive with their unobserved private type that determines both their prior preferences across the actions (e.g., control and treatment) and their baseline rewards without taking any treatment.
1 code implementation • 12 Jul 2021 • Keegan Harris, Daniel Ngo, Logan Stapleton, Hoda Heidari, Zhiwei Steven Wu
In settings where Machine Learning (ML) algorithms automate or inform consequential decisions about people, individual decision subjects are often incentivized to strategically modify their observable attributes to receive more favorable predictions.
no code implementations • 25 Jun 2021 • Xinwei Zhang, Xiangyi Chen, Mingyi Hong, Zhiwei Steven Wu, JinFeng Yi
Recently, there has been a line of work on incorporating the formal privacy notion of differential privacy with FL.
1 code implementation • NeurIPS 2021 • Terrance Liu, Giuseppe Vietri, Zhiwei Steven Wu
We study private synthetic data generation for query release, where the goal is to construct a sanitized version of a sensitive dataset, subject to differential privacy, that approximately preserves the answers to a large collection of statistical queries.
no code implementations • NeurIPS 2021 • Keegan Harris, Hoda Heidari, Zhiwei Steven Wu
In particular, we consider settings in which the agent's effort investment today can accumulate over time in the form of an internal state, impacting both his future rewards and those of the principal.
2 code implementations • 4 Mar 2021 • Gokul Swamy, Sanjiban Choudhury, J. Andrew Bagnell, Zhiwei Steven Wu
We provide a unifying view of a large family of previous imitation learning algorithms through the lens of moment matching.
no code implementations • 1 Mar 2021 • Yahav Bechavod, Chara Podimata, Zhiwei Steven Wu, Juba Ziani
We initiate the study of the effects of non-transparency in decision rules on individuals' ability to improve in strategic learning settings.
no code implementations • 21 Feb 2021 • Sushant Agarwal, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Zhiwei Steven Wu, Himabindu Lakkaraju
As machine learning black boxes are increasingly being deployed in critical domains such as healthcare and criminal justice, there has been a growing emphasis on developing techniques for explaining these black boxes in a post hoc manner.
1 code implementation • 17 Feb 2021 • Terrance Liu, Giuseppe Vietri, Thomas Steinke, Jonathan Ullman, Zhiwei Steven Wu
In many statistical problems, incorporating priors can significantly improve performance.
no code implementations • 22 Oct 2020 • Hong Shen, Wesley Hanwen Deng, Aditi Chattopadhyay, Zhiwei Steven Wu, Xu Wang, Haiyi Zhu
In this paper, we present Value Card, an educational toolkit to inform students and practitioners of the social impacts of different machine learning models via deliberation.
no code implementations • 18 Sep 2020 • Giuseppe Vietri, Borja Balle, Akshay Krishnamurthy, Zhiwei Steven Wu
Motivated by high-stakes decision-making domains like personalized medicine where user information is inherently sensitive, we design privacy preserving exploration policies for episodic reinforcement learning (RL).
1 code implementation • 26 Aug 2020 • Zheyuan Ryan Shi, Zhiwei Steven Wu, Rayid Ghani, Fei Fang
In this paper, we introduce bandit data-driven optimization, the first iterative prediction-prescription framework to address these pain points.
1 code implementation • ICLR 2021 • Marcel Neunhoeffer, Zhiwei Steven Wu, Cynthia Dwork
We also provide a non-private variant of PGB that improves the data quality of standard GAN training.
no code implementations • 20 Jul 2020 • Guy Aridor, Yishay Mansour, Aleksandrs Slivkins, Zhiwei Steven Wu
Users arrive one by one and choose between the two firms, so that each firm makes progress on its bandit problem only if it is chosen.
1 code implementation • ICML 2020 • Giuseppe Vietri, Grace Tian, Mark Bun, Thomas Steinke, Zhiwei Steven Wu
We present three new algorithms for constructing differentially private synthetic data: a sanitized version of a sensitive dataset that approximately preserves the answers to a large collection of statistical queries.
no code implementations • ICLR 2021 • Yingxue Zhou, Zhiwei Steven Wu, Arindam Banerjee
Existing lower bounds on private ERM show that such dependence on $p$ is inevitable in the worst case.
no code implementations • NeurIPS 2020 • Xiangyi Chen, Zhiwei Steven Wu, Mingyi Hong
Deep learning models are increasingly popular in many machine learning applications where the training data may contain sensitive information.
no code implementations • 24 Jun 2020 • Yingxue Zhou, Xiangyi Chen, Mingyi Hong, Zhiwei Steven Wu, Arindam Banerjee
We obtain this rate by providing the first analyses on a collection of private gradient-based methods, including adaptive algorithms DP RMSProp and DP Adam.
no code implementations • 19 May 2020 • Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan, Zhiwei Steven Wu
Online learning algorithms, widely used to power search and content optimization on the web, must balance exploration and exploitation, potentially sacrificing the experience of current users in order to gain information that will lead to better decisions in the future.
no code implementations • ICML 2020 • Raef Bassily, Albert Cheu, Shay Moran, Aleksandar Nikolov, Jonathan Ullman, Zhiwei Steven Wu
In comparison, with only private samples, this problem cannot be solved even for simple query classes with VC-dimension one, and without any private samples, a larger public sample of size $d/\alpha^2$ is needed.
no code implementations • ICML 2020 • Vidyashankar Sivakumar, Zhiwei Steven Wu, Arindam Banerjee
Bandit learning algorithms typically involve the balance of exploration and exploitation.
no code implementations • ICML 2020 • Huanyu Zhang, Gautam Kamath, Janardhan Kulkarni, Zhiwei Steven Wu
We consider the problem of learning Markov Random Fields (including the prototypical example, the Ising model) under the constraint of differential privacy.
no code implementations • 21 Feb 2020 • Sivakanth Gopi, Gautam Kamath, Janardhan Kulkarni, Aleksandar Nikolov, Zhiwei Steven Wu, Huanyu Zhang
Absent privacy constraints, this problem requires $O(\log k)$ samples from $p$, and it was recently shown that the same complexity is achievable under (central) differential privacy.
no code implementations • 17 Feb 2020 • Yahav Bechavod, Katrina Ligett, Zhiwei Steven Wu, Juba Ziani
We consider an online regression setting in which individuals adapt to the regression model: arriving individuals are aware of the current model, and invest strategically in modifying their own features so as to improve the predicted score that the current model assigns to them.
no code implementations • NeurIPS 2020 • Yahav Bechavod, Christopher Jung, Zhiwei Steven Wu
We study an online learning problem subject to the constraint of individual fairness, which requires that similar individuals are treated similarly.
no code implementations • 13 Feb 2020 • Vikas K. Garg, Adam Kalai, Katrina Ligett, Zhiwei Steven Wu
Domain generalization is the problem of machine learning when the training data and the test data come from different data domains.
no code implementations • NeurIPS 2019 • Arindam Banerjee, Qilong Gu, Vidyashankar Sivakumar, Zhiwei Steven Wu
We also discuss stochastic process based forms of J-L, RIP, and sketching, to illustrate the generality of the results.
1 code implementation • ICML 2020 • Seth Neel, Aaron Roth, Giuseppe Vietri, Zhiwei Steven Wu
We find that for the problem of learning linear classifiers, directly optimizing for 0/1 loss using our approach can outperform the more standard approach of privately optimizing a convex-surrogate loss function on the Adult dataset.
no code implementations • NeurIPS 2020 • Xiangyi Chen, Tiancong Chen, Haoran Sun, Zhiwei Steven Wu, Mingyi Hong
We show that these algorithms are non-convergent whenever there is some disparity between the expected median and mean over the local gradients.
no code implementations • NeurIPS 2019 • Mark Bun, Gautam Kamath, Thomas Steinke, Zhiwei Steven Wu
The sample complexity of our basic algorithm is $O\left(\frac{\log m}{\alpha^2} + \frac{\log m}{\alpha \varepsilon}\right)$, representing a minimal cost for privacy when compared to the non-private algorithm.
4 code implementations • 30 May 2019 • Alekh Agarwal, Miroslav Dudík, Zhiwei Steven Wu
Our schemes only require access to standard risk minimization algorithms (such as standard classification or least-squares regression) while providing theoretical guarantees on the optimality and fairness of the obtained solutions.
1 code implementation • 25 May 2019 • Christopher Jung, Michael Kearns, Seth Neel, Aaron Roth, Logan Stapleton, Zhiwei Steven Wu
We consider settings in which the right notion of fairness is not captured by simple mathematical definitions (such as equality of error rates across groups), but might be more complex and nuanced and thus require elicitation from individual or collective stakeholders.
no code implementations • 19 Feb 2019 • Nicole Immorlica, Jieming Mao, Aleksandrs Slivkins, Zhiwei Steven Wu
We consider Bayesian Exploration: a simple model in which the recommendation system (the "principal") controls the information flow to the users (the "agents") and strives to incentivize exploration via information asymmetry.
no code implementations • 14 Feb 2019 • Guy Aridor, Kevin Liu, Aleksandrs Slivkins, Zhiwei Steven Wu
We empirically study the interplay between exploration and competition.
1 code implementation • NeurIPS 2019 • Yahav Bechavod, Katrina Ligett, Aaron Roth, Bo Waggoner, Zhiwei Steven Wu
We study an online classification problem with partial feedback in which individuals arrive one at a time from a fixed but unknown distribution, and must be classified as positive or negative.
no code implementations • 4 Dec 2018 • Brett K. Beaulieu-Jones, William Yuan, Samuel G. Finlayson, Zhiwei Steven Wu
Deep learning with medical data often requires larger sample sizes than are available at single providers.
no code implementations • NeurIPS 2019 • Matthew Joseph, Janardhan Kulkarni, Jieming Mao, Zhiwei Steven Wu
We study a basic private estimation problem: each of $n$ users draws a single i.i.d.
no code implementations • 19 Nov 2018 • Seth Neel, Aaron Roth, Zhiwei Steven Wu
We show that there is an efficient algorithm for privately constructing synthetic data for any such class, given a non-private learning oracle.
no code implementations • 14 Nov 2018 • Nicole Immorlica, Jieming Mao, Aleksandrs Slivkins, Zhiwei Steven Wu
We propose and design recommendation systems that incentivize efficient exploration.
5 code implementations • 24 Aug 2018 • Michael Kearns, Seth Neel, Aaron Roth, Zhiwei Steven Wu
In this paper, we undertake an extensive empirical evaluation of the algorithm of Kearns et al. On four real datasets for which fairness is a concern, we investigate the basic convergence of the algorithm when instantiated with fast heuristics in place of learning oracles, measure the tradeoffs between fairness and accuracy, and compare this approach with the recent algorithm of Agarwal et al. [2018], which implements weaker and more traditional marginal fairness constraints defined by individual protected attributes.
1 code implementation • 9 Jun 2018 • Miruna Oprescu, Vasilis Syrgkanis, Zhiwei Steven Wu
We provide a consistency rate and establish asymptotic normality for our estimator.
no code implementations • 1 Jun 2018 • Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan, Zhiwei Steven Wu
Returning to group-level effects, we show that under the same conditions, negative group externalities essentially vanish under the greedy algorithm.
1 code implementation • 22 Mar 2018 • Aaron Schein, Zhiwei Steven Wu, Alexandra Schofield, Mingyuan Zhou, Hanna Wallach
We present a general method for privacy-preserving Bayesian inference in Poisson factorization, a broad class of models that includes some of the most widely used models in the social sciences.
2 code implementations • ICML 2018 • Akshay Krishnamurthy, Zhiwei Steven Wu, Vasilis Syrgkanis
This paper studies semiparametric contextual bandits, a generalization of the linear stochastic bandit problem where the reward for an action is modeled as a linear function of known action features confounded by a non-linear action-independent term.
no code implementations • NeurIPS 2018 • Sampath Kannan, Jamie Morgenstern, Aaron Roth, Bo Waggoner, Zhiwei Steven Wu
Bandit learning is characterized by the tension between long-term exploration and short-term exploitation.
5 code implementations • ICML 2018 • Michael Kearns, Seth Neel, Aaron Roth, Zhiwei Steven Wu
We prove that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses.
no code implementations • 22 Oct 2017 • Jinshuo Dong, Aaron Roth, Zachary Schutzman, Bo Waggoner, Zhiwei Steven Wu
We study an online linear classification problem, in which the data is generated by strategic agents who manipulate their features in an effort to change the classification outcome.
no code implementations • ICML 2017 • Michael Kearns, Aaron Roth, Zhiwei Steven Wu
We consider the problem of selecting a strong pool of individuals from several populations with incomparable skills (e.g., soccer players, mathematicians, and singers) in a fair manner.
no code implementations • 27 Feb 2017 • Yishay Mansour, Aleksandrs Slivkins, Zhiwei Steven Wu
Most modern systems strive to learn from interactions with users, and many engage in exploration: making potentially suboptimal choices for the sake of acquiring new information.
no code implementations • 19 Jul 2016 • Aaron Roth, Aleksandrs Slivkins, Jonathan Ullman, Zhiwei Steven Wu
We are able to apply this technique to the setting of unit demand buyers despite the fact that in that setting the goods are not divisible, and the natural fractional relaxation of a unit demand valuation is not strongly concave.
no code implementations • 3 Jun 2016 • Michael Kearns, Zhiwei Steven Wu
We consider a new learning model in which a joint distribution over vector pairs $(x, y)$ is determined by an unknown function $c(x)$ that maps input vectors $x$ not to individual outputs, but to entire distributions over output vectors $y$.
no code implementations • 24 Feb 2016 • Rachel Cummings, Katrina Ligett, Kobbi Nissim, Aaron Roth, Zhiwei Steven Wu
We also show that perfect generalization is a strictly stronger guarantee than differential privacy, but that, nevertheless, many learning tasks can be carried out subject to the guarantees of perfect generalization.
no code implementations • 24 Feb 2016 • Yishay Mansour, Aleksandrs Slivkins, Vasilis Syrgkanis, Zhiwei Steven Wu
As a key technical tool, we introduce the concept of explorable actions, the actions which some incentive-compatible policy can recommend with non-zero probability.
no code implementations • NeurIPS 2016 • Shahin Jabbari, Ryan Rogers, Aaron Roth, Zhiwei Steven Wu
This models the problem of predicting the behavior of a rational agent whose goals are known, but whose resources are unknown.
no code implementations • 4 Apr 2015 • Aaron Roth, Jonathan Ullman, Zhiwei Steven Wu
In this paper we present an approach to solving for the leader's optimal strategy in certain Stackelberg games where the follower's utility function (and thus the subsequent best response of the follower) is unknown.
no code implementations • 6 Feb 2014 • Marco Gaboardi, Emilio Jesús Gallego Arias, Justin Hsu, Aaron Roth, Zhiwei Steven Wu
We present a practical, differentially private algorithm for answering a large number of queries on high dimensional datasets.