Search Results for author: Vijay Keswani

Found 15 papers, 9 papers with code

Fair Classification with Partial Feedback: An Exploration-Based Data-Collection Approach

no code implementations 17 Feb 2024 Vijay Keswani, Anay Mehrotra, L. Elisa Celis

For any exploration strategy, the approach comes with guarantees that (1) all sub-populations are explored, (2) the fraction of false positives is bounded, and (3) the trained classifier converges to a "desired" classifier.

Fairness

Designing Closed-Loop Models for Task Allocation

1 code implementation 31 May 2023 Vijay Keswani, L. Elisa Celis, Krishnaram Kenthapadi, Matthew Lease

Instead, we find ourselves in a "closed" decision-making loop in which the same fallible human decisions we rely on in practice must also be used to guide task allocation.

Decision Making

Addressing Strategic Manipulation Disparities in Fair Classification

no code implementations 22 May 2022 Vijay Keswani, L. Elisa Celis

In real-world classification settings, such as loan application evaluation or content moderation on online platforms, individuals respond to classifier predictions by strategically updating their features to increase their likelihood of receiving a particular (positive) decision (at a certain cost).

Classification Fairness

Designing Closed Human-in-the-loop Deferral Pipelines

1 code implementation 9 Feb 2022 Vijay Keswani, Matthew Lease, Krishnaram Kenthapadi

Our key insight is that by exploiting weak prior information, we can match experts to input examples to ensure fairness and accuracy of the resulting deferral framework, even when imperfect and biased experts are used in place of ground truth labels.

Decision Making Fairness

Auditing for Diversity using Representative Examples

1 code implementation 15 Jul 2021 Vijay Keswani, L. Elisa Celis

Our proposed algorithm uses the pairwise similarity between elements in the dataset and elements in the control set to effectively bootstrap an approximation to the disparity of the dataset.

Attribute
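
A minimal sketch of the auditing idea described in this entry (not the authors' implementation): infer the group of each unlabeled dataset element from its similarity to a small labeled control set, then report the gap between inferred group fractions. The function name and the cosine-similarity choice are illustrative assumptions.

```python
import numpy as np

def audit_disparity(dataset, control_set, control_labels):
    """dataset: (n, d) features; control_set: (m, d) features; control_labels: (m,) in {0, 1}."""
    # Cosine similarity between every dataset element and every control element.
    d_norm = dataset / np.linalg.norm(dataset, axis=1, keepdims=True)
    c_norm = control_set / np.linalg.norm(control_set, axis=1, keepdims=True)
    sim = d_norm @ c_norm.T                      # shape (n, m)

    # Assign each element the label of its most similar control element.
    inferred = control_labels[np.argmax(sim, axis=1)]

    # Disparity: absolute gap between the two inferred group fractions.
    frac = inferred.mean()
    return abs(frac - (1.0 - frac))

rng = np.random.default_rng(0)
est = audit_disparity(rng.normal(size=(500, 16)),
                      rng.normal(size=(20, 16)),
                      rng.integers(0, 2, size=20))
print(f"estimated disparity: {est:.2f}")
```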

Towards Unbiased and Accurate Deferral to Multiple Experts

1 code implementation 25 Feb 2021 Vijay Keswani, Matthew Lease, Krishnaram Kenthapadi

Machine learning models are often implemented in concert with humans in the pipeline, with the model having an option to defer to a domain expert in cases where it has low confidence in its inference.

BIG-bench Machine Learning Fairness
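
A minimal sketch of the deferral setting this entry describes (illustrative only; the confidence-threshold rule and names are assumptions, not the paper's learned deferral policy): the model answers where it is confident and otherwise defers to a human expert.

```python
import numpy as np

def predict_or_defer(model_probs, expert_labels, threshold=0.8):
    """model_probs: (n, k) class probabilities; expert_labels: (n,) human answers."""
    confident = model_probs.max(axis=1) >= threshold
    model_preds = model_probs.argmax(axis=1)
    # Use the model where it is confident, the expert everywhere else.
    return np.where(confident, model_preds, expert_labels), confident

probs = np.array([[0.95, 0.05], [0.55, 0.45], [0.10, 0.90]])
experts = np.array([1, 0, 1])
decisions, used_model = predict_or_defer(probs, experts)
print(decisions, used_model)   # [0 0 1] [ True False  True]
```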

Dialect Diversity in Text Summarization on Twitter

no code implementations 15 Jul 2020 Vijay Keswani, L. Elisa Celis

Discussions on Twitter involve participation from different communities with different dialects, and it is often necessary to summarize a large number of posts into a representative sample to provide a synopsis.

Attribute Extractive Summarization +2

A Convergent and Dimension-Independent Min-Max Optimization Algorithm

2 code implementations 22 Jun 2020 Vijay Keswani, Oren Mangoubi, Sushant Sachdeva, Nisheeth K. Vishnoi

The equilibrium point found by our algorithm depends on the proposal distribution, and when applying our algorithm to train GANs we choose the proposal distribution to be a distribution of stochastic gradients.
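
For context on the min-max setting, a tiny gradient descent-ascent (GDA) loop on the toy objective f(x, y) = x * y. This is NOT the paper's algorithm; plain GDA is known to spiral away from the equilibrium on such objectives, which is exactly the failure mode that convergent min-max methods address.

```python
import numpy as np

x, y, lr = 1.0, 1.0, 0.1
for step in range(200):
    gx, gy = y, x                       # df/dx = y, df/dy = x
    x, y = x - lr * gx, y + lr * gy     # x minimizes, y maximizes

print(f"after 200 GDA steps: x={x:.2f}, y={y:.2f}, |(x, y)|={np.hypot(x, y):.2f}")
# The norm grows rather than shrinking toward the equilibrium at (0, 0).
```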

Fair Classification with Noisy Protected Attributes: A Framework with Provable Guarantees

1 code implementation 8 Jun 2020 L. Elisa Celis, Lingxiao Huang, Vijay Keswani, Nisheeth K. Vishnoi

We present an optimization framework for learning a fair classifier in the presence of noisy perturbations in the protected attributes.

Fairness General Classification

Data preprocessing to mitigate bias: A maximum entropy based approach

1 code implementation ICML 2020 L. Elisa Celis, Vijay Keswani, Nisheeth K. Vishnoi

Unlike prior work, it can efficiently learn distributions over large domains, controllably adjust the representation rates of protected groups and achieve target fairness metrics such as statistical parity, yet remains close to the empirical distribution induced by the given dataset.

Fairness
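
A much-simplified illustration of the preprocessing goal stated in this entry (controlling group representation rates while staying close to the data), not the maximum entropy framework itself: reweight rows so each protected group carries equal total weight.

```python
import numpy as np

def equal_representation_weights(groups):
    """groups: (n,) protected-attribute labels. Returns per-row weights summing to 1."""
    values, counts = np.unique(groups, return_counts=True)
    per_group = 1.0 / len(values)                 # target: equal mass per group
    weight_of = {g: per_group / c for g, c in zip(values, counts)}
    return np.array([weight_of[g] for g in groups])

groups = np.array([0, 0, 0, 0, 1])                # group 0 is over-represented 4:1
w = equal_representation_weights(groups)
print(w, w[groups == 0].sum(), w[groups == 1].sum())   # each group gets mass 0.5
```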

Improved Adversarial Learning for Fair Classification

no code implementations 29 Jan 2019 L. Elisa Celis, Vijay Keswani

Motivated by concerns that machine learning algorithms may introduce significant bias in classification models, developing fair classifiers has become an important problem in machine learning research.

BIG-bench Machine Learning Classification +2

Implicit Diversity in Image Summarization

no code implementations 29 Jan 2019 L. Elisa Celis, Vijay Keswani

We develop a novel approach that takes as input a visibly diverse control set of images and uses this set to select a set of images of people in response to a query.

Attribute Image Retrieval

Balanced News Using Constrained Bandit-based Personalization

no code implementations 24 Jun 2018 Sayash Kapoor, Vijay Keswani, Nisheeth K. Vishnoi, L. Elisa Celis

We present a prototype for a news search engine that presents balanced viewpoints across liberal and conservative articles with the goal of de-polarizing content and allowing users to escape their filter bubble.

Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees

4 code implementations 15 Jun 2018 L. Elisa Celis, Lingxiao Huang, Vijay Keswani, Nisheeth K. Vishnoi

The main contribution of this paper is a new meta-algorithm for classification that takes as input a large class of fairness constraints, with respect to multiple non-disjoint sensitive attributes, and which comes with provable guarantees.

Classification Fairness +1
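
An illustration of one constraint family such a meta-algorithm can take as input (a statistical-rate / statistical-parity constraint). This only checks the constraint on a set of predictions; it does not implement the paper's optimization, and the names are assumed.

```python
import numpy as np

def statistical_rate(preds, groups):
    """Ratio of the smallest to the largest positive-prediction rate across groups."""
    rates = [preds[groups == g].mean() for g in np.unique(groups)]
    return min(rates) / max(rates)

preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
tau = 0.8                                        # desired lower bound on the ratio
rate = statistical_rate(preds, groups)
print(f"statistical rate = {rate:.2f}, constraint satisfied: {rate >= tau}")
```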

Fair and Diverse DPP-based Data Summarization

1 code implementation ICML 2018 L. Elisa Celis, Vijay Keswani, Damian Straszak, Amit Deshpande, Tarun Kathuria, Nisheeth K. Vishnoi

Sampling methods that choose a subset of the data proportional to its diversity in the feature space are popular for data summarization.

Data Summarization Fairness
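
A minimal sketch of diversity-proportional selection in the spirit described in this entry (greedy determinant maximization over a similarity kernel), not the paper's fair-DPP sampler; the fairness constraints on the selected subset are what the paper adds on top.

```python
import numpy as np

def greedy_diverse_subset(features, k):
    """Greedily pick k rows that keep the kernel determinant (diversity) large."""
    # RBF similarity kernel over the items.
    sq = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    L = np.exp(-sq)

    selected = []
    for _ in range(k):
        best, best_det = None, -np.inf
        for i in range(len(features)):
            if i in selected:
                continue
            idx = selected + [i]
            det = np.linalg.det(L[np.ix_(idx, idx)])
            if det > best_det:
                best, best_det = i, det
        selected.append(best)
    return selected

rng = np.random.default_rng(1)
items = rng.normal(size=(30, 5))
print(greedy_diverse_subset(items, 4))   # indices of a diverse 4-item summary
```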
