Search Results for author: Anay Mehrotra

Found 14 papers, 9 papers with code

Fair Classification with Partial Feedback: An Exploration-Based Data-Collection Approach

no code implementations • 17 Feb 2024 • Vijay Keswani, Anay Mehrotra, L. Elisa Celis

For any exploration strategy, the approach comes with guarantees that (1) all sub-populations are explored, (2) the fraction of false positives is bounded, and (3) the trained classifier converges to a "desired" classifier.

Fairness

Tree of Attacks: Jailbreaking Black-Box LLMs Automatically

1 code implementation • 4 Dec 2023 • Anay Mehrotra, Manolis Zampetakis, Paul Kassianik, Blaine Nelson, Hyrum Anderson, Yaron Singer, Amin Karbasi

In this work, we present Tree of Attacks with Pruning (TAP), an automated method for generating jailbreaks that only requires black-box access to the target LLM.
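The abstract describes a branch-and-prune search over candidate prompts that needs only black-box target access. This is not the paper's TAP implementation; it is a generic sketch of such a loop, and every function passed in (`attacker`, `target`, `on_topic`, `score`) is a hypothetical stand-in for a real LLM call:

```python
def tree_of_attacks(goal, attacker, target, on_topic, score,
                    branching=2, depth=3, threshold=10):
    """Sketch of an iterative tree search for an adversarial prompt:
    branch candidate prompts, prune off-topic ones before querying the
    (black-box) target, and stop when a response scores high enough."""
    frontier = [goal]
    for _ in range(depth):
        # Branch: each prompt spawns several refined candidates.
        candidates = [attacker(p) for p in frontier for _ in range(branching)]
        # Prune candidates that drifted away from the original goal.
        candidates = [p for p in candidates if on_topic(p, goal)]
        scored = []
        for p in candidates:
            response = target(p)          # only black-box access is needed
            s = score(response, goal)
            if s >= threshold:
                return p                  # successful prompt found
            scored.append((s, p))
        # Keep the most promising prompts for the next round.
        frontier = [p for _, p in sorted(scored, reverse=True)[:branching]]
        if not frontier:
            break
    return None
```

In practice the stand-ins would wrap API calls to an attacker model, an evaluator model, and the target model; the pruning step is what keeps the number of target queries small.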


Bias in Evaluation Processes: An Optimization-Based Model

1 code implementation • NeurIPS 2023 • L. Elisa Celis, Amit Kumar, Anay Mehrotra, Nisheeth K. Vishnoi

We characterize the distributions that arise from our model and study the effect of the parameters on the observed distribution.

Sampling Individually-Fair Rankings that are Always Group Fair

no code implementations • 21 Jun 2023 • Sruthi Gorantla, Anay Mehrotra, Amit Deshpande, Anand Louis

Fair ranking tasks, which ask to rank a set of items to maximize utility subject to satisfying group-fairness constraints, have gained significant interest in the Algorithmic Fairness, Information Retrieval, and Machine Learning literature.

Fairness · Information Retrieval +2

Subset Selection Based On Multiple Rankings in the Presence of Bias: Effectiveness of Fairness Constraints for Multiwinner Voting Score Functions

1 code implementation • 16 Jun 2023 • Niclas Boehmer, L. Elisa Celis, Lingxiao Huang, Anay Mehrotra, Nisheeth K. Vishnoi

We consider the problem of subset selection where one is given multiple rankings of items and the goal is to select the highest "quality" subset.

Fairness

Maximizing Submodular Functions for Recommendation in the Presence of Biases

1 code implementation • 3 May 2023 • Anay Mehrotra, Nisheeth K. Vishnoi

In empirical evaluation, with both synthetic and real-world data, we observe that this algorithm improves the utility of the output subset for this family of submodular functions over baselines.

Fairness · Recommendation Systems
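The classic baseline for this problem class is the greedy algorithm for monotone submodular maximization, which achieves a (1 − 1/e)-approximation. A minimal sketch follows; it is the standard greedy routine, not the paper's bias-aware algorithm, and the coverage function below is an illustrative example:

```python
def greedy_max(ground_set, f, k):
    """Greedily maximize a monotone submodular set function f over
    subsets of size k by repeatedly adding the item with the largest
    marginal gain (classic (1 - 1/e)-approximation guarantee)."""
    selected = set()
    for _ in range(k):
        best = max(
            (x for x in ground_set if x not in selected),
            key=lambda x: f(selected | {x}) - f(selected),
        )
        selected.add(best)
    return selected

# Illustrative utility: a coverage function (submodular) counting
# how many distinct topics the selected items cover.
items = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4, 5}}
f = lambda S: len(set().union(*[items[x] for x in S])) if S else 0
subset = greedy_max(items.keys(), f, 2)  # picks "c" first, then "a"
```

In the biased setting the paper studies, the observed utilities of some items are scaled down, so the greedy choice on observed values can differ from the greedy choice on true values.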

Fair Ranking with Noisy Protected Attributes

1 code implementation • 30 Nov 2022 • Anay Mehrotra, Nisheeth K. Vishnoi

The fair-ranking problem, which asks to rank a given set of items to maximize utility subject to group fairness constraints, has received attention in the fairness, information retrieval, and machine learning literature.

Fairness · Information Retrieval +1

Selection in the Presence of Implicit Bias: The Advantage of Intersectional Constraints

no code implementations • 3 Feb 2022 • Anay Mehrotra, Bary S. R. Pradelski, Nisheeth K. Vishnoi

Interventions such as the Rooney Rule and its generalizations, which require the decision maker to select at least a specified number of individuals from each affected group, have been proposed to mitigate the adverse effects of implicit bias in selection.
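The intervention the abstract describes — require at least a specified number of selections from each affected group — can be sketched as a lower-bound-constrained selection. This is an illustrative sketch of the constraint, not the paper's analysis; all names are assumptions:

```python
def constrained_select(candidates, scores, group, quotas, k):
    """Select k candidates by (possibly biased) score, subject to
    choosing at least quotas[g] candidates from each group g — a
    Rooney-Rule-style lower-bound constraint."""
    ranked = sorted(candidates, key=lambda c: scores[c], reverse=True)
    chosen = []
    # First satisfy each group's lower bound with its top-scored members.
    for g, q in quotas.items():
        chosen.extend([c for c in ranked if group[c] == g][:q])
    # Fill the remaining slots with the best candidates not yet chosen.
    for c in ranked:
        if len(chosen) >= k:
            break
        if c not in chosen:
            chosen.append(c)
    return chosen[:k]

scores = {"a": 4, "b": 3, "c": 2, "d": 1}
group = {"a": "X", "b": "X", "c": "Y", "d": "Y"}
# Without the quota the top 2 by score would be {"a", "b"};
# requiring one member of group "Y" forces "c" into the selection.
picked = constrained_select(list(scores), scores, group, {"Y": 1}, k=2)
```

With intersectional groups (e.g., defined by gender and race jointly), the quotas dictionary would carry one lower bound per intersection rather than per single attribute.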

Fairness for AUC via Feature Augmentation

no code implementations • 24 Nov 2021 • Hortense Fong, Vineet Kumar, Anay Mehrotra, Nisheeth K. Vishnoi

We evaluate fairAUC on synthetic and real-world datasets and find that it significantly improves AUC for the disadvantaged group relative to benchmarks maximizing overall AUC and minimizing bias between groups.

Fairness

Fair Classification with Adversarial Perturbations

1 code implementation • NeurIPS 2021 • L. Elisa Celis, Anay Mehrotra, Nisheeth K. Vishnoi

Our main contribution is an optimization framework to learn fair classifiers in this adversarial setting that comes with provable guarantees on accuracy and fairness.

Classification · Fairness +1

Mitigating Bias in Set Selection with Noisy Protected Attributes

2 code implementations • 9 Nov 2020 • Anay Mehrotra, L. Elisa Celis

Subset selection algorithms are ubiquitous in AI-driven applications, including online recruiting portals and image search engines, so it is imperative that these tools are not discriminatory on the basis of protected attributes such as gender or race.

Fairness · Image Retrieval

The Effect of the Rooney Rule on Implicit Bias in the Long Term

1 code implementation • 21 Oct 2020 • L. Elisa Celis, Chris Hays, Anay Mehrotra, Nisheeth K. Vishnoi

Our main result is that, when the panel is constrained by the Rooney Rule, its implicit bias decreases roughly at a rate inversely proportional to the size of the shortlist, independent of the number of candidates, whereas without the Rooney Rule the rate is inversely proportional to the number of candidates.

Interventions for Ranking in the Presence of Implicit Bias

no code implementations • 23 Jan 2020 • L. Elisa Celis, Anay Mehrotra, Nisheeth K. Vishnoi

Implicit bias is the unconscious attribution of particular qualities (or lack thereof) to a member of a particular social group (e.g., defined by gender or race).

Toward Controlling Discrimination in Online Ad Auctions

1 code implementation • 29 Jan 2019 • L. Elisa Celis, Anay Mehrotra, Nisheeth K. Vishnoi

To prevent this, we propose a constrained ad auction framework that maximizes the platform's revenue conditioned on ensuring that the audience seeing an advertiser's ad is distributed appropriately across sensitive types such as gender or race.

Fairness
