no code implementations • 18 May 2022 • Logan Stapleton, Min Hun Lee, Diana Qing, Marya Wright, Alexandra Chouldechova, Kenneth Holstein, Zhiwei Steven Wu, Haiyi Zhu
In this work, we conducted a set of seven design workshops with 35 stakeholders who have been impacted by the child welfare system or who work in it to understand their beliefs and concerns around predictive risk models (PRMs), and to engage them in imagining new uses of data and technologies in the child welfare system.
no code implementations • 13 May 2022 • Wesley Hanwen Deng, Manish Nagireddy, Michelle Seng Ah Lee, Jatinder Singh, Zhiwei Steven Wu, Kenneth Holstein, Haiyi Zhu
Recent years have seen the development of many open-source ML fairness toolkits aimed at helping ML practitioners assess and address unfairness in their systems.
no code implementations • 5 Apr 2022 • Anna Kawakami, Venkatesh Sivaraman, Hao-Fei Cheng, Logan Stapleton, Yanghuidi Cheng, Diana Qing, Adam Perer, Zhiwei Steven Wu, Haiyi Zhu, Kenneth Holstein
AI-based decision support tools (ADS) are increasingly used to augment human decision-making in high-stakes, social contexts.
no code implementations • 18 Mar 2022 • Shengyuan Hu, Zhiwei Steven Wu, Virginia Smith
Our work provides a new definition for group fairness in federated learning based on the notion of Bounded Group Loss (BGL), which can be easily applied to common federated learning objectives.
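As a rough illustration of how a Bounded Group Loss constraint can sit alongside a federated objective (a sketch based only on the abstract's wording, with hypothetical notation, not the paper's exact formulation):

```latex
% Sketch: K clients with n_k points each (n total), hypothesis h, per-client
% objective F_k, protected groups indexed by a, loss \ell, tolerance \gamma.
\begin{aligned}
\min_{h}\quad & \sum_{k=1}^{K} \frac{n_k}{n}\, F_k(h)
  && \text{(usual federated objective)} \\
\text{s.t.}\quad & \mathbb{E}\!\left[\ell\big(h(x), y\big)\,\middle|\, A = a\right] \le \gamma
  && \text{for every protected group } a .
\end{aligned}
```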
no code implementations • 10 Mar 2022 • Justin Whitehouse, Aaditya Ramdas, Ryan Rogers, Zhiwei Steven Wu
We construct filters that match the tightness of advanced composition, including constants, despite allowing for adaptively chosen privacy parameters.
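For reference, the advanced composition bound that serves as the benchmark here is the standard statement for $k$ mechanisms that are each $(\varepsilon,\delta)$-differentially private (quoted from the composition literature, not from this paper):

```latex
% Advanced composition: for any \delta' > 0, the k-fold composition is
% (\varepsilon', k\delta + \delta')-differentially private with
\varepsilon' \;=\; \varepsilon\sqrt{2k\ln(1/\delta')} \;+\; k\,\varepsilon\,(e^{\varepsilon}-1).
```

The abstract's point is that privacy filters can match this tightness, constants included, even when the individual privacy parameters are chosen adaptively rather than fixed up front.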
no code implementations • 17 Feb 2022 • Ian Waudby-Smith, Zhiwei Steven Wu, Aaditya Ramdas
This work derives methods for performing nonparametric, nonasymptotic statistical inference for population parameters under the constraint of local differential privacy (LDP).
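Not the paper's inference machinery, but a toy picture of the LDP constraint it operates under: each user privatizes their own bounded value (here with a Laplace mechanism and hypothetical parameter names) before the analyst ever sees it.

```python
import numpy as np

def ldp_release(value, epsilon, lower=0.0, upper=1.0):
    """Release one bounded value under epsilon-local DP via the Laplace mechanism."""
    clipped = float(np.clip(value, lower, upper))
    sensitivity = upper - lower                # each user contributes one bounded value
    return clipped + np.random.laplace(scale=sensitivity / epsilon)

# The analyst only ever observes the noisy reports.
rng = np.random.default_rng(0)
true_values = rng.uniform(size=10_000)
reports = [ldp_release(v, epsilon=1.0) for v in true_values]
print("noisy estimate of the population mean:", np.mean(reports))
```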
no code implementations • 10 Feb 2022 • Alberto Bietti, Chen-Yu Wei, Miroslav Dudik, John Langford, Zhiwei Steven Wu
Large-scale machine learning systems often involve data distributed across a collection of users.
no code implementations • 2 Feb 2022 • Dung Daniel Ngo, Giuseppe Vietri, Zhiwei Steven Wu
We study privacy-preserving exploration in sequential decision-making for environments that rely on sensitive data such as medical records.
1 code implementation • 2 Feb 2022 • Gokul Swamy, Sanjiban Choudhury, J. Andrew Bagnell, Zhiwei Steven Wu
We develop algorithms for imitation learning from policy data that was corrupted by temporally correlated noise in expert actions.
no code implementations • 28 Jan 2022 • Zuxin Liu, Zhepeng Cen, Vladislav Isenbaev, Wei Liu, Zhiwei Steven Wu, Bo Li, Ding Zhao
Safe reinforcement learning (RL) aims to learn policies that satisfy certain constraints before deploying to safety-critical applications.
no code implementations • 12 Dec 2021 • Keegan Harris, Valerie Chen, Joon Sik Kim, Ameet Talwalkar, Hoda Heidari, Zhiwei Steven Wu
While the decision maker's problem of finding the optimal Bayesian incentive-compatible (BIC) signaling policy takes the form of optimization over infinitely-many variables, we show that this optimization can be cast as a linear program over finitely-many regions of the space of possible assessment rules.
no code implementations • 5 Oct 2021 • Gokul Swamy, Sanjiban Choudhury, J. Andrew Bagnell, Zhiwei Steven Wu
Recent work by Jarrett et al. attempts to frame the problem of offline imitation learning (IL) as one of learning a joint energy-based model, with the hope of out-performing standard behavioral cloning.
no code implementations • 30 Aug 2021 • Shengyuan Hu, Zhiwei Steven Wu, Virginia Smith
Many problems in machine learning rely on multi-task learning (MTL), in which the goal is to solve multiple related machine learning tasks simultaneously.
1 code implementation • 21 Jul 2021 • Daniel Ngo, Logan Stapleton, Vasilis Syrgkanis, Zhiwei Steven Wu
In rounds, a social planner interacts with a sequence of heterogeneous agents who arrive with their unobserved private type that determines both their prior preferences across the actions (e.g., control and treatment) and their baseline rewards without taking any treatment.
1 code implementation • 12 Jul 2021 • Keegan Harris, Daniel Ngo, Logan Stapleton, Hoda Heidari, Zhiwei Steven Wu
In settings where Machine Learning (ML) algorithms automate or inform consequential decisions about people, individual decision subjects are often incentivized to strategically modify their observable attributes to receive more favorable predictions.
no code implementations • 25 Jun 2021 • Xinwei Zhang, Xiangyi Chen, Mingyi Hong, Zhiwei Steven Wu, JinFeng Yi
Recently, there has been a line of work on incorporating the formal privacy notion of differential privacy into federated learning (FL).
1 code implementation • NeurIPS 2021 • Terrance Liu, Giuseppe Vietri, Zhiwei Steven Wu
We study private synthetic data generation for query release, where the goal is to construct a sanitized version of a sensitive dataset, subject to differential privacy, that approximately preserves the answers to a large collection of statistical queries.
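To make the query-release task concrete, here is the naive per-query baseline (Laplace noise with the privacy budget split evenly across queries); the synthetic-data approach studied in the paper aims to do much better than this when the query collection is large. The function and parameter names below are illustrative.

```python
import numpy as np

def naive_query_release(data, queries, epsilon):
    """Answer k counting queries with the Laplace mechanism under basic composition.

    Each query is a boolean predicate over rows; a normalized count has
    sensitivity 1/n, so per-query noise has scale k / (n * epsilon).
    """
    n, k = len(data), len(queries)
    per_query_eps = epsilon / k
    answers = []
    for q in queries:
        true_answer = sum(q(row) for row in data) / n
        answers.append(true_answer + np.random.laplace(scale=1.0 / (n * per_query_eps)))
    return answers
```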
no code implementations • NeurIPS 2021 • Keegan Harris, Hoda Heidari, Zhiwei Steven Wu
In particular, we consider settings in which the agent's effort investment today can accumulate over time in the form of an internal state, impacting both his future rewards and those of the principal.
1 code implementation • 4 Mar 2021 • Gokul Swamy, Sanjiban Choudhury, J. Andrew Bagnell, Zhiwei Steven Wu
We provide a unifying view of a large family of previous imitation learning algorithms through the lens of moment matching.
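A minimal sketch of the moment-matching framing itself (not any of the surveyed algorithms): imitation is cast as driving the gap between expert and learner feature expectations, i.e. their "moments", toward zero. The feature function and data layout here are hypothetical.

```python
import numpy as np

def feature_expectation(trajectories, feature_fn):
    """Average feature vector ('moment') over all state-action pairs in the trajectories."""
    feats = [feature_fn(s, a) for traj in trajectories for (s, a) in traj]
    return np.mean(feats, axis=0)

def moment_gap(expert_trajs, learner_trajs, feature_fn):
    """Discrepancy an imitation learner tries to minimize under this view."""
    return float(np.linalg.norm(
        feature_expectation(expert_trajs, feature_fn)
        - feature_expectation(learner_trajs, feature_fn)))
```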
no code implementations • 1 Mar 2021 • Yahav Bechavod, Chara Podimata, Zhiwei Steven Wu, Juba Ziani
We initiate the study of the effects of non-transparency in decision rules on individuals' ability to improve in strategic learning settings.
no code implementations • 21 Feb 2021 • Sushant Agarwal, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Zhiwei Steven Wu, Himabindu Lakkaraju
As machine learning black boxes are increasingly being deployed in critical domains such as healthcare and criminal justice, there has been a growing emphasis on developing techniques for explaining these black boxes in a post hoc manner.
1 code implementation • 17 Feb 2021 • Terrance Liu, Giuseppe Vietri, Thomas Steinke, Jonathan Ullman, Zhiwei Steven Wu
In many statistical problems, incorporating priors can significantly improve performance.
no code implementations • 22 Oct 2020 • Hong Shen, Hanwen Wesley Deng, Aditi Chattopadhyay, Zhiwei Steven Wu, Xu Wang, Haiyi Zhu
In this paper, we present Value Cards, an educational toolkit to inform students and practitioners of the social impacts of different machine learning models via deliberation.
no code implementations • 18 Sep 2020 • Giuseppe Vietri, Borja Balle, Akshay Krishnamurthy, Zhiwei Steven Wu
Motivated by high-stakes decision-making domains like personalized medicine, where user information is inherently sensitive, we design privacy-preserving exploration policies for episodic reinforcement learning (RL).
1 code implementation • 26 Aug 2020 • Zheyuan Ryan Shi, Zhiwei Steven Wu, Rayid Ghani, Fei Fang
In this paper, we introduce bandit data-driven optimization, the first iterative prediction-prescription framework to address these pain points.
1 code implementation • ICLR 2021 • Marcel Neunhoeffer, Zhiwei Steven Wu, Cynthia Dwork
We also provide a non-private variant of PGB that improves the data quality of standard GAN training.
no code implementations • 20 Jul 2020 • Guy Aridor, Yishay Mansour, Aleksandrs Slivkins, Zhiwei Steven Wu
Users arrive one by one and choose between the two firms, so that each firm makes progress on its bandit problem only if it is chosen.
1 code implementation • ICML 2020 • Giuseppe Vietri, Grace Tian, Mark Bun, Thomas Steinke, Zhiwei Steven Wu
We present three new algorithms for constructing differentially private synthetic data---a sanitized version of a sensitive dataset that approximately preserves the answers to a large collection of statistical queries.
no code implementations • ICLR 2021 • Yingxue Zhou, Zhiwei Steven Wu, Arindam Banerjee
Existing lower bounds on private ERM show that such dependence on $p$ is inevitable in the worst case.
no code implementations • NeurIPS 2020 • Xiangyi Chen, Zhiwei Steven Wu, Mingyi Hong
Deep learning models are increasingly popular in many machine learning applications where the training data may contain sensitive information.
no code implementations • 24 Jun 2020 • Yingxue Zhou, Xiangyi Chen, Mingyi Hong, Zhiwei Steven Wu, Arindam Banerjee
We obtain this rate by providing the first analyses on a collection of private gradient-based methods, including adaptive algorithms DP RMSProp and DP Adam.
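As a generic illustration of what a private adaptive update looks like (per-example clipping, Gaussian noise, then an RMSProp-style preconditioner), here is a sketch with made-up parameter names; it is not the DP RMSProp or DP Adam analyzed in the paper.

```python
import numpy as np

def dp_adaptive_step(params, per_example_grads, state, lr=1e-3, clip_norm=1.0,
                     noise_multiplier=1.0, decay=0.9, eps=1e-8):
    """One privatized adaptive step: clip each example's gradient, add Gaussian
    noise to the sum, average, then precondition with a running second moment."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noisy_avg = (np.sum(clipped, axis=0)
                 + np.random.normal(scale=noise_multiplier * clip_norm, size=params.shape)
                 ) / len(per_example_grads)
    state["v"] = decay * state.get("v", np.zeros_like(params)) + (1 - decay) * noisy_avg ** 2
    new_params = params - lr * noisy_avg / (np.sqrt(state["v"]) + eps)
    return new_params, state
```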
no code implementations • 19 May 2020 • Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan, Zhiwei Steven Wu
Online learning algorithms, widely used to power search and content optimization on the web, must balance exploration and exploitation, potentially sacrificing the experience of current users in order to gain information that will lead to better decisions in the future.
no code implementations • ICML 2020 • Raef Bassily, Albert Cheu, Shay Moran, Aleksandar Nikolov, Jonathan Ullman, Zhiwei Steven Wu
In comparison, with only private samples, this problem cannot be solved even for simple query classes with VC-dimension one, and without any private samples, a larger public sample of size $d/\alpha^2$ is needed.
no code implementations • ICML 2020 • Vidyashankar Sivakumar, Zhiwei Steven Wu, Arindam Banerjee
Bandit learning algorithms typically involve the balance of exploration and exploitation.
no code implementations • ICML 2020 • Huanyu Zhang, Gautam Kamath, Janardhan Kulkarni, Zhiwei Steven Wu
We consider the problem of learning Markov Random Fields (including the prototypical example, the Ising model) under the constraint of differential privacy.
no code implementations • 21 Feb 2020 • Sivakanth Gopi, Gautam Kamath, Janardhan Kulkarni, Aleksandar Nikolov, Zhiwei Steven Wu, Huanyu Zhang
Absent privacy constraints, this problem requires $O(\log k)$ samples from $p$, and it was recently shown that the same complexity is achievable under (central) differential privacy.
no code implementations • 17 Feb 2020 • Yahav Bechavod, Katrina Ligett, Zhiwei Steven Wu, Juba Ziani
We consider an online regression setting in which individuals adapt to the regression model: arriving individuals are aware of the current model, and invest strategically in modifying their own features so as to improve the predicted score that the current model assigns to them.
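A worked special case of the strategic response described above (illustrative, not the paper's exact model): if the current model is linear with coefficients $\theta$ and manipulation carries a quadratic cost, the best response has a closed form.

```latex
% An agent with features x, facing score \theta^\top x and cost
% \lambda \lVert x' - x \rVert^2, moves to
x' \;=\; \arg\max_{x'}\; \theta^\top x' - \lambda \lVert x' - x \rVert^2
   \;=\; x + \frac{\theta}{2\lambda}.
```

Under this cost, each arriving individual shifts their features along the direction of the published coefficients, which is precisely the feedback the online regression setting must contend with.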
no code implementations • NeurIPS 2020 • Yahav Bechavod, Christopher Jung, Zhiwei Steven Wu
We study an online learning problem subject to the constraint of individual fairness, which requires that similar individuals are treated similarly.
no code implementations • 13 Feb 2020 • Vikas K. Garg, Adam Kalai, Katrina Ligett, Zhiwei Steven Wu
Domain generalization is the machine learning problem in which the training data and the test data come from different domains.
no code implementations • NeurIPS 2019 • Arindam Banerjee, Qilong Gu, Vidyashankar Sivakumar, Zhiwei Steven Wu
We also discuss stochastic process based forms of J-L, RIP, and sketching, to illustrate the generality of the results.
1 code implementation • ICML 2020 • Seth Neel, Aaron Roth, Giuseppe Vietri, Zhiwei Steven Wu
We find that for the problem of learning linear classifiers, directly optimizing for 0/1 loss using our approach can out-perform the more standard approach of privately optimizing a convex-surrogate loss function on the Adult dataset.
no code implementations • NeurIPS 2020 • Xiangyi Chen, Tiancong Chen, Haoran Sun, Zhiwei Steven Wu, Mingyi Hong
We show that these algorithms are non-convergent whenever there is some disparity between the expected median and mean over the local gradients.
3 code implementations • 30 May 2019 • Alekh Agarwal, Miroslav Dudík, Zhiwei Steven Wu
Our schemes only require access to standard risk minimization algorithms (such as standard classification or least-squares regression) while providing theoretical guarantees on the optimality and fairness of the obtained solutions.
no code implementations • NeurIPS 2019 • Mark Bun, Gautam Kamath, Thomas Steinke, Zhiwei Steven Wu
The sample complexity of our basic algorithm is $O\left(\frac{\log m}{\alpha^2} + \frac{\log m}{\alpha \varepsilon}\right)$, representing a minimal cost for privacy when compared to the non-private algorithm.
1 code implementation • 25 May 2019 • Christopher Jung, Michael Kearns, Seth Neel, Aaron Roth, Logan Stapleton, Zhiwei Steven Wu
We consider settings in which the right notion of fairness is not captured by simple mathematical definitions (such as equality of error rates across groups), but might be more complex and nuanced and thus require elicitation from individual or collective stakeholders.
no code implementations • 19 Feb 2019 • Nicole Immorlica, Jieming Mao, Aleksandrs Slivkins, Zhiwei Steven Wu
We consider Bayesian Exploration: a simple model in which the recommendation system (the "principal") controls the information flow to the users (the "agents") and strives to incentivize exploration via information asymmetry.
no code implementations • 14 Feb 2019 • Guy Aridor, Kevin Liu, Aleksandrs Slivkins, Zhiwei Steven Wu
We empirically study the interplay between exploration and competition.
1 code implementation • NeurIPS 2019 • Yahav Bechavod, Katrina Ligett, Aaron Roth, Bo Waggoner, Zhiwei Steven Wu
We study an online classification problem with partial feedback in which individuals arrive one at a time from a fixed but unknown distribution, and must be classified as positive or negative.
no code implementations • 4 Dec 2018 • Brett K. Beaulieu-Jones, William Yuan, Samuel G. Finlayson, Zhiwei Steven Wu
Deep learning with medical data often requires larger sample sizes than are available at single providers.
no code implementations • NeurIPS 2019 • Matthew Joseph, Janardhan Kulkarni, Jieming Mao, Zhiwei Steven Wu
We study a basic private estimation problem: each of $n$ users draws a single i.i.d.
no code implementations • 19 Nov 2018 • Seth Neel, Aaron Roth, Zhiwei Steven Wu
We show that there is an efficient algorithm for privately constructing synthetic data for any such class, given a non-private learning oracle.
no code implementations • 14 Nov 2018 • Nicole Immorlica, Jieming Mao, Aleksandrs Slivkins, Zhiwei Steven Wu
We design a policy with optimal regret in the worst case over reward distributions.
5 code implementations • 24 Aug 2018 • Michael Kearns, Seth Neel, Aaron Roth, Zhiwei Steven Wu
In this paper, we undertake an extensive empirical evaluation of the algorithm of Kearns et al. On four real datasets for which fairness is a concern, we investigate the basic convergence of the algorithm when instantiated with fast heuristics in place of learning oracles, measure the tradeoffs between fairness and accuracy, and compare this approach with the recent algorithm of Agarwal et al. [2018], which implements weaker and more traditional marginal fairness constraints defined by individual protected attributes.
1 code implementation • 9 Jun 2018 • Miruna Oprescu, Vasilis Syrgkanis, Zhiwei Steven Wu
We provide a consistency rate and establish asymptotic normality for our estimator.
no code implementations • 1 Jun 2018 • Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan, Zhiwei Steven Wu
Returning to group-level effects, we show that under the same conditions, negative group externalities essentially vanish under the greedy algorithm.
1 code implementation • 22 Mar 2018 • Aaron Schein, Zhiwei Steven Wu, Alexandra Schofield, Mingyuan Zhou, Hanna Wallach
We present a general method for privacy-preserving Bayesian inference in Poisson factorization, a broad class of models that includes some of the most widely used models in the social sciences.
2 code implementations • ICML 2018 • Akshay Krishnamurthy, Zhiwei Steven Wu, Vasilis Syrgkanis
This paper studies semiparametric contextual bandits, a generalization of the linear stochastic bandit problem where the reward for an action is modeled as a linear function of known action features confounded by a non-linear action-independent term.
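Written out, the reward model described in the abstract takes roughly the following form (the notation is mine):

```latex
% Round t, action a, known feature vector x_{t,a}, unknown parameter \theta^\ast:
r_t(a) \;=\; \langle x_{t,a}, \theta^\ast \rangle \;+\; f_t \;+\; \text{noise},
% where f_t may depend arbitrarily (and non-linearly) on the round and context,
% but not on the chosen action a.
```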
no code implementations • NeurIPS 2018 • Sampath Kannan, Jamie Morgenstern, Aaron Roth, Bo Waggoner, Zhiwei Steven Wu
Bandit learning is characterized by the tension between long-term exploration and short-term exploitation.
5 code implementations • ICML 2018 • Michael Kearns, Seth Neel, Aaron Roth, Zhiwei Steven Wu
We prove that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses.
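To make the auditing task concrete, here is the quantity being audited for a single candidate subgroup (hypothetical helper names); the hardness result above concerns searching over a rich class of subgroups for the one that maximizes this gap.

```python
import numpy as np

def fpr(y_true, y_pred, mask):
    """False positive rate of predictions restricted to rows where mask is True."""
    negatives = mask & (y_true == 0)
    return float(np.mean(y_pred[negatives] == 1)) if negatives.any() else 0.0

def subgroup_fpr_gap(y_true, y_pred, subgroup_mask):
    """Difference between the subgroup's FPR and the overall FPR."""
    overall = fpr(y_true, y_pred, np.ones_like(y_true, dtype=bool))
    return fpr(y_true, y_pred, subgroup_mask) - overall
```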
no code implementations • 22 Oct 2017 • Jinshuo Dong, Aaron Roth, Zachary Schutzman, Bo Waggoner, Zhiwei Steven Wu
We study an online linear classification problem, in which the data is generated by strategic agents who manipulate their features in an effort to change the classification outcome.
no code implementations • ICML 2017 • Michael Kearns, Aaron Roth, Zhiwei Steven Wu
We consider the problem of selecting a strong pool of individuals from several populations with incomparable skills (e.g., soccer players, mathematicians, and singers) in a fair manner.
no code implementations • 27 Feb 2017 • Yishay Mansour, Aleksandrs Slivkins, Zhiwei Steven Wu
Most modern systems strive to learn from interactions with users, and many engage in exploration: making potentially suboptimal choices for the sake of acquiring new information.
no code implementations • 19 Jul 2016 • Aaron Roth, Aleksandrs Slivkins, Jonathan Ullman, Zhiwei Steven Wu
We are able to apply this technique to the setting of unit demand buyers despite the fact that in that setting the goods are not divisible, and the natural fractional relaxation of a unit demand valuation is not strongly concave.
no code implementations • 3 Jun 2016 • Michael Kearns, Zhiwei Steven Wu
We consider a new learning model in which a joint distribution over vector pairs $(x, y)$ is determined by an unknown function $c(x)$ that maps input vectors $x$ not to individual outputs, but to entire distributions over output vectors $y$.
no code implementations • 24 Feb 2016 • Rachel Cummings, Katrina Ligett, Kobbi Nissim, Aaron Roth, Zhiwei Steven Wu
We also show that perfect generalization is a strictly stronger guarantee than differential privacy, but that, nevertheless, many learning tasks can be carried out subject to the guarantees of perfect generalization.
no code implementations • 24 Feb 2016 • Yishay Mansour, Aleksandrs Slivkins, Vasilis Syrgkanis, Zhiwei Steven Wu
As a key technical tool, we introduce the concept of explorable actions, the actions which some incentive-compatible policy can recommend with non-zero probability.
no code implementations • NeurIPS 2016 • Shahin Jabbari, Ryan Rogers, Aaron Roth, Zhiwei Steven Wu
This models the problem of predicting the behavior of a rational agent whose goals are known, but whose resources are unknown.
no code implementations • 4 Apr 2015 • Aaron Roth, Jonathan Ullman, Zhiwei Steven Wu
In this paper we present an approach to solving for the leader's optimal strategy in certain Stackelberg games where the follower's utility function (and thus the subsequent best response of the follower) is unknown.
no code implementations • 6 Feb 2014 • Marco Gaboardi, Emilio Jesús Gallego Arias, Justin Hsu, Aaron Roth, Zhiwei Steven Wu
We present a practical, differentially private algorithm for answering a large number of queries on high dimensional datasets.