no code implementations • 17 Feb 2024 • Vijay Keswani, Anay Mehrotra, L. Elisa Celis
For any exploration strategy, the approach comes with guarantees that (1) all sub-populations are explored, (2) the fraction of false positives is bounded, and (3) the trained classifier converges to a "desired" classifier.
1 code implementation • 4 Dec 2023 • Anay Mehrotra, Manolis Zampetakis, Paul Kassianik, Blaine Nelson, Hyrum Anderson, Yaron Singer, Amin Karbasi
In this work, we present Tree of Attacks with Pruning (TAP), an automated method for generating jailbreaks that only requires black-box access to the target LLM.
1 code implementation • NeurIPS 2023 • L. Elisa Celis, Amit Kumar, Anay Mehrotra, Nisheeth K. Vishnoi
We characterize the distributions that arise from our model and study the effect of the parameters on the observed distribution.
no code implementations • 21 Jun 2023 • Sruthi Gorantla, Anay Mehrotra, Amit Deshpande, Anand Louis
Fair ranking tasks, which ask to rank a set of items to maximize utility subject to satisfying group-fairness constraints, have gained significant interest in the Algorithmic Fairness, Information Retrieval, and Machine Learning literature.
1 code implementation • 16 Jun 2023 • Niclas Boehmer, L. Elisa Celis, Lingxiao Huang, Anay Mehrotra, Nisheeth K. Vishnoi
We consider the problem of subset selection where one is given multiple rankings of items and the goal is to select the highest "quality" subset.
1 code implementation • 3 May 2023 • Anay Mehrotra, Nisheeth K. Vishnoi
In empirical evaluation, with both synthetic and real-world data, we observe that this algorithm improves the utility of the output subset for this family of submodular functions over baselines.
1 code implementation • 30 Nov 2022 • Anay Mehrotra, Nisheeth K. Vishnoi
The fair-ranking problem, which asks to rank a given set of items to maximize utility subject to group fairness constraints, has received attention in the fairness, information retrieval, and machine learning literature.
no code implementations • 3 Feb 2022 • Anay Mehrotra, Bary S. R. Pradelski, Nisheeth K. Vishnoi
Interventions such as the Rooney Rule and its generalizations, which require the decision maker to select at least a specified number of individuals from each affected group, have been proposed to mitigate the adverse effects of implicit bias in selection.
no code implementations • 24 Nov 2021 • Hortense Fong, Vineet Kumar, Anay Mehrotra, Nisheeth K. Vishnoi
We evaluate fairAUC on synthetic and real-world datasets and find that it significantly improves AUC for the disadvantaged group relative to benchmarks maximizing overall AUC and minimizing bias between groups.
1 code implementation • NeurIPS 2021 • L. Elisa Celis, Anay Mehrotra, Nisheeth K. Vishnoi
Our main contribution is an optimization framework to learn fair classifiers in this adversarial setting that comes with provable guarantees on accuracy and fairness.
2 code implementations • 9 Nov 2020 • Anay Mehrotra, L. Elisa Celis
Subset selection algorithms are ubiquitous in AI-driven applications, including online recruiting portals and image search engines, so it is imperative that these tools do not discriminate on the basis of protected attributes such as gender or race.
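The kind of constraint this line of work studies can be illustrated with a minimal sketch: pick the k highest-scoring items while guaranteeing a minimum number of selections from each protected group. The function name `fair_top_k` and the greedy strategy below are purely illustrative assumptions, not the paper's algorithm.

```python
def fair_top_k(items, k, min_per_group):
    """Select k items with the highest scores subject to a lower-bound
    fairness constraint: at least min_per_group[g] items from each group g.
    Illustrative sketch only; not the algorithm from the paper.

    items: list of (score, group) pairs.
    """
    remaining = sorted(items, key=lambda x: x[0], reverse=True)
    chosen = []
    # First satisfy each group's lower bound with its best-scoring candidates.
    for group, need in min_per_group.items():
        group_items = [it for it in remaining if it[1] == group][:need]
        chosen.extend(group_items)
        for it in group_items:
            remaining.remove(it)
    # Fill the rest of the k slots greedily by score.
    chosen.extend(remaining[: k - len(chosen)])
    return sorted(chosen, key=lambda x: x[0], reverse=True)

items = [(0.9, "A"), (0.8, "A"), (0.7, "A"), (0.6, "B"), (0.5, "B")]
print(fair_top_k(items, 3, {"B": 1}))  # the B lower bound displaces (0.7, "A")
```

Without the constraint, the top 3 by score would all come from group A; the lower bound forces one slot to go to the best item from group B.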
1 code implementation • 21 Oct 2020 • L. Elisa Celis, Chris Hays, Anay Mehrotra, Nisheeth K. Vishnoi
Our main result is that, when the panel is constrained by the Rooney Rule, its implicit bias decreases roughly at a rate inversely proportional to the size of the shortlist, independent of the number of candidates; without the Rooney Rule, the rate is inversely proportional to the number of candidates.
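Schematically, writing $k$ for the shortlist size, $n$ for the number of candidates, and $\beta_t$ for the panel's implicit bias after $t$ rounds, the stated rates can be read as

```latex
\text{with the Rooney Rule:} \quad \beta_{t+1} \approx \beta_t\Bigl(1 - \frac{c}{k}\Bigr),
\qquad
\text{without:} \quad \beta_{t+1} \approx \beta_t\Bigl(1 - \frac{c}{n}\Bigr),
```

where $c$ is a constant. This multiplicative-decay form is only a schematic reading of the stated rates; the exact dynamics depend on the model in the paper.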
no code implementations • 23 Jan 2020 • L. Elisa Celis, Anay Mehrotra, Nisheeth K. Vishnoi
Implicit bias is the unconscious attribution of particular qualities (or lack thereof) to a member of a particular social group (e.g., defined by gender or race).
1 code implementation • 29 Jan 2019 • L. Elisa Celis, Anay Mehrotra, Nisheeth K. Vishnoi
To prevent this, we propose a constrained ad auction framework that maximizes the platform's revenue conditioned on ensuring that the audience seeing an advertiser's ad is distributed appropriately across sensitive types such as gender or race.