no code implementations • 17 Feb 2024 • Vijay Keswani, Anay Mehrotra, L. Elisa Celis
For any exploration strategy, the approach comes with guarantees that (1) all sub-populations are explored, (2) the fraction of false positives is bounded, and (3) the trained classifier converges to a "desired" classifier.
1 code implementation • 31 May 2023 • Vijay Keswani, L. Elisa Celis, Krishnaram Kenthapadi, Matthew Lease
Instead, we find ourselves in a "closed" decision-making loop in which the same fallible human decisions we rely on in practice must also be used to guide task allocation.
no code implementations • 22 May 2022 • Vijay Keswani, L. Elisa Celis
In real-world classification settings, such as loan application evaluation or content moderation on online platforms, individuals respond to classifier predictions by strategically updating their features, at some cost, to increase their likelihood of receiving a positive decision.
1 code implementation • 9 Feb 2022 • Vijay Keswani, Matthew Lease, Krishnaram Kenthapadi
Our key insight is that by exploiting weak prior information, we can match experts to input examples to ensure fairness and accuracy of the resulting deferral framework, even when imperfect and biased experts are used in place of ground truth labels.
1 code implementation • 15 Jul 2021 • Vijay Keswani, L. Elisa Celis
Our proposed algorithm uses the pairwise similarity between elements in the dataset and elements in the control set to effectively bootstrap an approximation to the disparity of the dataset.
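The similarity-based bootstrapping idea can be illustrated with a minimal sketch: assign each (unlabeled) dataset element the group of its most similar element in a small labeled control set, then measure the imbalance of the induced group proportions. The function name, cosine similarity, and nearest-neighbour assignment here are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def estimate_disparity(data, control, control_groups):
    """Approximate the group disparity of an unlabeled dataset using its
    similarity to a small labeled control set (simplified sketch).

    data: (n, d) feature embeddings, unlabeled.
    control: (m, d) embeddings with known group labels.
    control_groups: (m,) binary group labels for the control set.
    """
    # Cosine similarity between every dataset element and every control element.
    data_n = data / np.linalg.norm(data, axis=1, keepdims=True)
    ctrl_n = control / np.linalg.norm(control, axis=1, keepdims=True)
    sim = data_n @ ctrl_n.T  # shape (n, m)

    # Assign each dataset element the group of its most similar control element.
    groups = control_groups[sim.argmax(axis=1)]

    # Disparity: absolute gap between the two groups' estimated proportions.
    p1 = groups.mean()
    return abs(p1 - (1 - p1))
```

A perfectly balanced dataset would score 0 under this proxy; a dataset drawn entirely from one group would score 1.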
1 code implementation • 25 Feb 2021 • Vijay Keswani, Matthew Lease, Krishnaram Kenthapadi
Machine learning models are often deployed in concert with humans in the pipeline, with the model having an option to defer to a domain expert in cases where it has low confidence in its inference.
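The deferral setup described above can be sketched with a simple confidence threshold: the model predicts on its own when it is confident, and hands the example to a human expert otherwise. The threshold rule and function names here are illustrative assumptions; the paper studies learned deferral policies rather than a fixed cutoff.

```python
import numpy as np

def predict_with_deferral(probs, expert_labels, threshold=0.8):
    """Hybrid human-AI prediction via threshold-based deferral (sketch).

    probs: (n,) model probabilities for the positive class.
    expert_labels: (n,) labels the expert would give if consulted.
    Returns the combined predictions and a mask of deferred examples.
    """
    confidence = np.maximum(probs, 1 - probs)   # confidence in either class
    model_preds = (probs >= 0.5).astype(int)
    deferred = confidence < threshold           # low-confidence cases go to the expert
    preds = np.where(deferred, expert_labels, model_preds)
    return preds, deferred
```

In practice the threshold trades off expert workload against accuracy: lowering it defers more cases to the (costlier but presumably more reliable) human.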
no code implementations • 15 Jul 2020 • Vijay Keswani, L. Elisa Celis
Discussions on Twitter involve participation from different communities with different dialects, and it is often necessary to summarize a large number of posts into a representative sample that provides a synopsis.
2 code implementations • 22 Jun 2020 • Vijay Keswani, Oren Mangoubi, Sushant Sachdeva, Nisheeth K. Vishnoi
The equilibrium point found by our algorithm depends on the proposal distribution, and when applying our algorithm to train GANs we choose the proposal distribution to be a distribution of stochastic gradients.
1 code implementation • 8 Jun 2020 • L. Elisa Celis, Lingxiao Huang, Vijay Keswani, Nisheeth K. Vishnoi
We present an optimization framework for learning a fair classifier in the presence of noisy perturbations in the protected attributes.
1 code implementation • ICML 2020 • L. Elisa Celis, Vijay Keswani, Nisheeth K. Vishnoi
Unlike prior work, it can efficiently learn distributions over large domains, controllably adjust the representation rates of protected groups and achieve target fairness metrics such as statistical parity, yet remains close to the empirical distribution induced by the given dataset.
no code implementations • 29 Jan 2019 • L. Elisa Celis, Vijay Keswani
Motivated by concerns that machine learning algorithms may introduce significant bias in classification models, developing fair classifiers has become an important problem in machine learning research.
no code implementations • 29 Jan 2019 • L. Elisa Celis, Vijay Keswani
We develop a novel approach that takes as input a visibly diverse control set of images and uses it to select images of people in response to a query.
no code implementations • 24 Jun 2018 • Sayash Kapoor, Vijay Keswani, Nisheeth K. Vishnoi, L. Elisa Celis
We present a prototype for a news search engine that presents balanced viewpoints across liberal and conservative articles with the goal of de-polarizing content and allowing users to escape their filter bubble.
4 code implementations • 15 Jun 2018 • L. Elisa Celis, Lingxiao Huang, Vijay Keswani, Nisheeth K. Vishnoi
The main contribution of this paper is a new meta-algorithm for classification that takes as input a large class of fairness constraints, with respect to multiple non-disjoint sensitive attributes, and which comes with provable guarantees.
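As a rough illustration of fairness-constrained classification, the sketch below trains a logistic regression with a statistical-parity penalty: the squared gap between the two groups' mean predicted positive rates is added to the loss. This is a deliberately simplified stand-in, not the paper's meta-algorithm, which handles a much broader class of constraints over multiple non-disjoint sensitive attributes with provable guarantees.

```python
import numpy as np

def fair_logistic_regression(X, y, group, lam=1.0, lr=0.1, steps=500):
    """Gradient descent on logistic loss plus a statistical-parity
    penalty lam * (mean_pred[g=0] - mean_pred[g=1])^2 (sketch only)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y)            # logistic loss gradient
        # Gradient of the squared parity gap between the two groups.
        gap = p[group == 0].mean() - p[group == 1].mean()
        dp = p * (1 - p)                          # sigmoid derivative
        g0 = (X[group == 0] * dp[group == 0, None]).mean(axis=0)
        g1 = (X[group == 1] * dp[group == 1, None]).mean(axis=0)
        grad += lam * 2 * gap * (g0 - g1)
        w -= lr * grad
    return w
```

Increasing `lam` tightens the parity constraint at some cost in accuracy, the usual fairness-accuracy trade-off.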
1 code implementation • ICML 2018 • L. Elisa Celis, Vijay Keswani, Damian Straszak, Amit Deshpande, Tarun Kathuria, Nisheeth K. Vishnoi
Sampling methods that choose a subset of the data proportional to its diversity in the feature space are popular for data summarization.
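One standard way to realize diversity-proportional selection is a greedy determinantal heuristic: repeatedly add the point that most increases the determinant of the selected points' similarity kernel, so near-duplicates are penalized. This DPP-style greedy is a common sketch of the idea, not the paper's exact sampling method.

```python
import numpy as np

def greedy_diverse_subset(X, k):
    """Greedily select k rows of X maximizing det of their similarity
    kernel, a determinantal (DPP-style) diversity heuristic (sketch)."""
    n = X.shape[0]
    K = X @ X.T + 1e-6 * np.eye(n)   # linear kernel, jittered for stability
    selected = []
    for _ in range(k):
        best, best_det = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            # Determinant of the candidate subset's kernel submatrix:
            # large when the chosen points span diverse directions.
            det = np.linalg.det(K[np.ix_(idx, idx)])
            if det > best_det:
                best, best_det = i, det
        selected.append(best)
    return selected
```

Because the determinant collapses toward zero for nearly collinear points, the greedy step naturally skips items similar to those already chosen.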