1 code implementation • 26 Feb 2024 • Jeffrey G. Wang, Jason Wang, Marvin Li, Seth Neel
In fine-tuning, we find that, given access to the loss of the fine-tuned and base models, a fine-tuned loss ratio attack (FLoRA) is able to achieve near-perfect MIA performance.
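As a rough illustration of the loss-ratio idea, here is a minimal sketch assuming HuggingFace-style causal LMs; the model names and the threshold calibration are placeholders, not the paper's exact attack:

```python
import torch

def sequence_loss(model, tokenizer, text):
    """Average per-token cross-entropy of `text` under `model`
    (assumes a HuggingFace-style causal LM)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

def flora_score(fine_tuned, base, tokenizer, text):
    """Loss-ratio membership score: fine-tuning drives the loss on its
    training members down relative to the base model, so a small ratio
    suggests membership."""
    return sequence_loss(fine_tuned, tokenizer, text) / sequence_loss(base, tokenizer, text)

# Flag `text` as a suspected fine-tuning member when the ratio falls
# below a threshold calibrated on known non-members, e.g.
#   flora_score(ft_model, base_model, tok, text) < threshold
```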
1 code implementation • 11 Dec 2023 • Seth Neel, Peter Chang
Specifically, we focus on work that red-teams models to highlight privacy risks, attempts to build privacy into the training or inference process, enables efficient data deletion from trained models to comply with existing privacy regulations, and tries to mitigate copyright issues.
no code implementations • 22 Oct 2023 • Marvin Li, Jason Wang, Jeffrey Wang, Seth Neel
In this paper, we present Model Perturbations (MoPe), a new method to identify with high confidence if a given text is in the training data of a pre-trained language model, given white-box access to the model's parameters.
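A minimal sketch of the perturbation statistic this suggests (the noise scale, number of draws, and decision rule below are illustrative placeholders; the paper defines the actual statistic):

```python
import copy
import torch

def mope_statistic(model, loss_fn, text, sigma=0.005, n_draws=10):
    """Perturbation statistic: average increase in loss on `text` when
    the model's parameters receive isotropic Gaussian noise. Points
    from the training data tend to sit in sharper regions of the loss
    surface, so their loss rises more under perturbation."""
    base_loss = loss_fn(model, text)
    increases = []
    for _ in range(n_draws):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                p.add_(sigma * torch.randn_like(p))  # white-box access needed
        increases.append(loss_fn(noisy, text) - base_loss)
    return sum(increases) / n_draws  # larger value => more likely a member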
no code implementations • 18 Oct 2023 • Lukman Olagoke, Salil Vadhan, Seth Neel
In this paper we study whether, given access to a trained GAN as well as fresh samples from the underlying distribution, an attacker can efficiently identify whether a given point is a member of the GAN's training data.
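One attack shape consistent with this setup, purely as an illustration (the `discriminator` callable and the quantile threshold are assumptions on my part, not the paper's construction):

```python
import numpy as np

def gan_membership_test(discriminator, candidate, fresh_samples, q=0.95):
    """Illustrative discriminator-score test: flag `candidate` as a
    suspected training member if the discriminator rates it as more
    "real" than almost all fresh draws from the true distribution
    (an overfit GAN tends to score its training points unusually high)."""
    fresh_scores = np.array([discriminator(x) for x in fresh_samples])
    threshold = np.quantile(fresh_scores, q)  # calibrated on fresh samples
    return discriminator(candidate) > threshold
```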
1 code implementation • 11 Oct 2023 • Martin Pawelczyk, Seth Neel, Himabindu Lakkaraju
In this work, we propose a new class of unlearning methods for LLMs that we call "In-Context Unlearning", which provides inputs in context without updating model parameters.
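A toy sketch of what such a prompt might look like for sentiment classification (the flipped-label format is a simplified reading of the idea; the exact prompt construction is in the paper):

```python
def unlearning_prompt(forget_example, keep_examples, query):
    """Build an in-context unlearning prompt for sentiment
    classification: the point to be 'forgotten' appears with a flipped
    label, other training points keep their true labels, and the model
    is then queried as usual; no gradient update ever happens."""
    text, label = forget_example
    flipped = "negative" if label == "positive" else "positive"
    lines = [f"Review: {text}\nSentiment: {flipped}"]
    lines += [f"Review: {t}\nSentiment: {y}" for t, y in keep_examples]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)
```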
no code implementations • 7 Mar 2023 • Seth Neel
While naively applying private linear regression techniques $l$ times leads to a $\sqrt{l}$ multiplicative increase in error over the standard linear regression setting, in Subsection 4.1 we modify techniques based on sufficient statistics perturbation (SSP) to yield greatly improved dependence on $l$.
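A minimal sketch of the SSP idea (the Gaussian noise scale and regularization below are illustrative placeholders, not the paper's calibrated mechanism):

```python
import numpy as np

def ssp_regression(X, y, epsilon, bound=1.0, reg=1.0, rng=None):
    """Sufficient statistics perturbation, sketched: perturb the
    sufficient statistics X^T X and X^T y, then solve the (ridge-
    regularized) normal equations. The noise scale below is a
    placeholder; a real mechanism calibrates it to the sensitivity of
    the statistics (rows bounded in norm by `bound`) and the exact
    privacy guarantee."""
    rng = rng or np.random.default_rng()
    d = X.shape[1]
    scale = bound**2 / epsilon                      # illustrative calibration
    N = rng.normal(0, scale, (d, d))
    A = X.T @ X + (N + N.T) / 2                     # symmetric noise
    b = X.T @ y + rng.normal(0, scale, d)
    return np.linalg.solve(A + reg * np.eye(d), b)  # reg keeps A invertible
```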
no code implementations • 3 Mar 2023 • Peter W. Chang, Leor Fishman, Seth Neel
It is widely held that one cause of downstream bias in classifiers is bias present in the training data.
1 code implementation • 10 Nov 2022 • Martin Pawelczyk, Himabindu Lakkaraju, Seth Neel
As predictive models are increasingly being employed to make consequential decisions, there is a growing emphasis on developing techniques that can provide algorithmic recourse to affected individuals.
1 code implementation • NeurIPS 2021 • Varun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, Chris Waites
In this paper, we give a general reduction from deletion guarantees against adaptive sequences to deletion guarantees against non-adaptive sequences, using differential privacy and its connection to max information.
2 code implementations • 6 Jul 2020 • Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi
We study the data deletion problem for convex models.
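A minimal sketch of gradient-based deletion for a convex loss (the step count, step size, and noise scale are placeholders; the paper derives the values that yield formal deletion guarantees):

```python
import numpy as np

def delete_and_update(theta, grad_fn, remaining_data,
                      steps=5, lr=0.1, sigma=0.01, rng=None):
    """Gradient-based deletion, sketched: after a point is removed,
    fine-tune the current parameters on the remaining data for a few
    steps, then publish a Gaussian-perturbed model so the result is
    statistically close to what full retraining would produce."""
    rng = rng or np.random.default_rng()
    for _ in range(steps):
        theta = theta - lr * grad_fn(theta, remaining_data)
    return theta + rng.normal(0, sigma, size=theta.shape)
```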
no code implementations • 12 Dec 2019 • Emily Diana, Michael Kearns, Seth Neel, Aaron Roth
We consider a fundamental dynamic allocation problem motivated by the problem of $\textit{securities lending}$ in financial markets, the mechanism underlying the short selling of stocks.
no code implementations • 9 Sep 2019 • Christopher Jung, Katrina Ligett, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, Moshe Shenfeld
This second claim follows from a thought experiment in which we imagine that the dataset is resampled from the posterior distribution after the mechanism has committed to its answers.
1 code implementation • ICML 2020 • Seth Neel, Aaron Roth, Giuseppe Vietri, Zhiwei Steven Wu
We find that for the problem of learning linear classifiers, directly optimizing for 0/1 loss using our approach can outperform the more standard approach of privately optimizing a convex-surrogate loss function on the Adult dataset.
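For intuition only, here is the textbook exponential mechanism over a finite set of candidate classifiers with 0/1 utility; the paper's contribution is an oracle-efficient algorithm that avoids this brute-force enumeration:

```python
import numpy as np

def exp_mech_01(classifiers, X, y, epsilon, rng=None):
    """Textbook exponential mechanism over a finite hypothesis class:
    sample a classifier with probability proportional to
    exp(epsilon * (# correct) / 2). Changing one example changes the
    count of correct predictions by at most 1, so the utility has
    sensitivity 1."""
    rng = rng or np.random.default_rng()
    scores = np.array([np.sum(h(X) == y) for h in classifiers])
    logits = epsilon * scores / 2.0
    probs = np.exp(logits - logits.max())  # subtract max for stability
    probs /= probs.sum()
    return classifiers[rng.choice(len(classifiers), p=probs)]
```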
1 code implementation • 25 May 2019 • Christopher Jung, Michael Kearns, Seth Neel, Aaron Roth, Logan Stapleton, Zhiwei Steven Wu
We consider settings in which the right notion of fairness is not captured by simple mathematical definitions (such as equality of error rates across groups), but might be more complex and nuanced and thus require elicitation from individual or collective stakeholders.
no code implementations • 7 Apr 2019 • Matthew Joseph, Jieming Mao, Seth Neel, Aaron Roth
Next, we show that our reduction is tight by exhibiting a family of problems such that for any $k$, there is a fully interactive $k$-compositional protocol which solves the problem, while no sequentially interactive protocol can solve the problem without at least an $\tilde \Omega(k)$ factor more examples.
no code implementations • 19 Nov 2018 • Seth Neel, Aaron Roth, Zhiwei Steven Wu
We show that there is an efficient algorithm for privately constructing synthetic data for any such class, given a non-private learning oracle.
no code implementations • 30 Aug 2018 • Hadi Elzayn, Shahin Jabbari, Christopher Jung, Michael Kearns, Seth Neel, Aaron Roth, Zachary Schutzman
We formalize this fairness notion for allocation problems and investigate its algorithmic consequences.
5 code implementations • 24 Aug 2018 • Michael Kearns, Seth Neel, Aaron Roth, Zhiwei Steven Wu
In this paper, we undertake an extensive empirical evaluation of the algorithm of Kearns et al. [2018]. On four real datasets for which fairness is a concern, we investigate the basic convergence of the algorithm when instantiated with fast heuristics in place of learning oracles, measure the tradeoffs between fairness and accuracy, and compare this approach with the recent algorithm of Agarwal et al. [2018], which implements weaker and more traditional marginal fairness constraints defined by individual protected attributes.
no code implementations • ICML 2018 • Seth Neel, Aaron Roth
Data that is gathered adaptively (via bandit algorithms, for example) exhibits bias.
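A tiny simulation makes the effect concrete (a hypothetical greedy two-armed bandit with both arms' true means equal to 0; the data the algorithm collects come out negatively biased):

```python
import numpy as np

def greedy_bandit_bias(n_runs=5000, horizon=50, seed=0):
    """Greedy two-armed bandit where both arms have true mean 0: an
    arm is abandoned as soon as it looks worse, so the final sample
    mean of either arm is negatively biased even though every
    individual reward is an unbiased draw."""
    rng = np.random.default_rng(seed)
    final_means = []
    for _ in range(n_runs):
        sums, counts = np.zeros(2), np.zeros(2)
        for a in (0, 1):                        # pull each arm once
            sums[a] += rng.normal()
            counts[a] += 1
        for _ in range(horizon - 2):
            a = int(np.argmax(sums / counts))   # greedy arm choice
            sums[a] += rng.normal()
            counts[a] += 1
        final_means.append(sums[0] / counts[0]) # sample mean of arm 0
    return float(np.mean(final_means))

print(greedy_bandit_bias())  # noticeably below the true mean of 0
```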
no code implementations • NeurIPS 2017 • Katrina Ligett, Seth Neel, Aaron Roth, Bo Waggoner, Steven Z. Wu
Traditional approaches to differential privacy assume a fixed privacy requirement $\epsilon$ for a computation, and attempt to maximize the accuracy of the computation subject to the privacy constraint.
5 code implementations • ICML 2018 • Michael Kearns, Seth Neel, Aaron Roth, Zhiwei Steven Wu
We prove that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses.
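A simplified sketch of the auditing-as-learning connection (logistic regression stands in for the weak agnostic learner, and the threshold `gamma` is illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def audit_false_positive_rates(X, y, preds, gamma=0.05):
    """Auditing as learning, sketched: restrict to true negatives
    (y == 0) and train a learner to predict where the model issues
    false positives. The learned region is a candidate subgroup; a
    large gap between its FP rate and the overall FP rate certifies a
    subgroup fairness violation."""
    neg = y == 0
    Xn, fp = X[neg], (preds[neg] == 1).astype(int)
    if fp.min() == fp.max():                   # no false positives at all
        return None
    learner = LogisticRegression(max_iter=1000).fit(Xn, fp)
    in_group = learner.predict(Xn) == 1
    if not in_group.any():
        return None
    gap = fp[in_group].mean() - fp.mean()      # subgroup vs overall FP rate
    return learner if gap > gamma else None    # None means no violation found
```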
1 code implementation • 7 Jun 2017 • Richard Berk, Hoda Heidari, Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, Aaron Roth
We introduce a flexible family of fairness regularizers for (linear and logistic) regression problems.
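A minimal sketch of one simple member of such a family: a squared gap between the two groups' average predictions added to ridge-regularized squared loss (an illustration, not the paper's exact regularizers):

```python
import numpy as np

def fair_linear_regression(X, y, group, lam=1.0, alpha=0.1,
                           lr=0.01, iters=2000):
    """Ridge-regularized squared loss plus `lam` times the squared gap
    between group-average predictions, solved by gradient descent.
    Assumes standardized features so a fixed step size is reasonable;
    `lam` trades accuracy off against fairness."""
    n, d = X.shape
    w = np.zeros(d)
    delta = X[group == 0].mean(axis=0) - X[group == 1].mean(axis=0)
    for _ in range(iters):
        grad = 2 * X.T @ (X @ w - y) / n       # squared-loss gradient
        grad += 2 * lam * (delta @ w) * delta  # group fairness penalty
        grad += 2 * alpha * w                  # ridge term
        w -= lr * grad
    return w
```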
1 code implementation • 30 May 2017 • Katrina Ligett, Seth Neel, Aaron Roth, Bo Waggoner, Z. Steven Wu
Traditional approaches to differential privacy assume a fixed privacy requirement $\epsilon$ for a computation, and attempt to maximize the accuracy of the computation subject to the privacy constraint.
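A highly simplified sketch of the accuracy-first loop for a private mean (fresh Laplace noise per level and a scale-based stopping rule are my simplifications; the paper instead uses correlated noise reduction and a private accuracy check):

```python
import numpy as np

def accuracy_first_mean(x, target_err, eps_schedule, rng=None):
    """Try an increasing schedule of privacy budgets for a private
    mean of values in [0, 1], stopping at the first level whose noise
    scale meets the target error; only the final epsilon is 'paid'
    ex post."""
    rng = rng or np.random.default_rng()
    n = len(x)
    for eps in eps_schedule:                    # e.g. [0.01, 0.1, 1.0]
        scale = 1.0 / (n * eps)                 # sensitivity of the mean is 1/n
        est = x.mean() + rng.laplace(0, scale)
        if scale <= target_err:
            return est, eps                     # ex-post privacy level
    return est, eps                             # schedule exhausted
```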
no code implementations • 29 Oct 2016 • Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, Aaron Roth
We study fairness in linear bandit problems.