Search Results for author: Preethi Lahoti

Found 7 papers, 2 papers with code

Detecting and Mitigating Test-time Failure Risks via Model-agnostic Uncertainty Learning

no code implementations 9 Sep 2021 Preethi Lahoti, Krishna P. Gummadi, Gerhard Weikum

Reliably predicting potential failure risks of machine learning (ML) systems when deployed with production data is a crucial aspect of trustworthy AI.

Accounting for Model Uncertainty in Algorithmic Discrimination

no code implementations 10 May 2021 Junaid Ali, Preethi Lahoti, Krishna P. Gummadi

We further propose methods to equalize group error rates that arise from model uncertainty in algorithmic decision making, and demonstrate their effectiveness on synthetic and real-world datasets.

Decision Making Fairness
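
As a rough, hypothetical illustration of the quantity this paper targets (not the authors' method), the sketch below estimates per-group error rates from an ensemble of models that stands in for model uncertainty; the function name group_error_rates and the toy data are assumptions made for the example.

```python
# Minimal sketch (not the paper's method): per-group error rates under model
# uncertainty, averaged over an ensemble of plausible models.
import numpy as np

def group_error_rates(prob_matrix, y_true, groups):
    """prob_matrix: (n_models, n_samples) predicted P(y=1) from an ensemble
    representing model uncertainty; y_true: (n_samples,) binary labels;
    groups: (n_samples,) group ids. Returns the mean error rate per group."""
    preds = (prob_matrix >= 0.5).astype(int)          # (n_models, n_samples)
    errors = (preds != y_true[None, :]).mean(axis=0)  # per-sample expected error
    return {g: errors[groups == g].mean() for g in np.unique(groups)}

# Toy usage: two groups, three models drawn at random.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=100)
g = rng.integers(0, 2, size=100)
probs = rng.random((3, 100))
print(group_error_rates(probs, y, g))
```

The paper's proposed methods go further and actively reduce the gap between these per-group rates; the snippet only shows the disparity being measured.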

Fairness without Demographics through Adversarially Reweighted Learning

3 code implementations NeurIPS 2020 Preethi Lahoti, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost, Nithum Thain, Xuezhi Wang, Ed H. Chi

Much of the previous machine learning (ML) fairness literature assumes that protected features such as race and sex are present in the dataset, and relies upon them to mitigate fairness concerns.

Fairness
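
The title refers to Adversarially Reweighted Learning (ARL), in which a learner is trained against an adversary that never observes protected attributes. The sketch below is a loose, hypothetical PyTorch rendition of that min-max reweighting idea, not the official implementation; the network sizes, the softmax-based normalized weighting, and the training loop details are assumptions made for illustration.

```python
# Hypothetical ARL-style sketch: the adversary (which sees only features and
# labels, never protected attributes) upweights examples it predicts the
# learner handles poorly; the learner minimizes the reweighted loss.
import torch
import torch.nn as nn

n, d = 256, 10
X = torch.randn(n, d)
y = (X[:, 0] > 0).float()

learner = nn.Sequential(nn.Linear(d, 1))        # main classifier
adversary = nn.Sequential(nn.Linear(d + 1, 1))  # scores (x, y) pairs
opt_l = torch.optim.Adam(learner.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss(reduction="none")

for step in range(200):
    # Learner step: minimize the adversarially reweighted loss
    # (weights are treated as constants here).
    per_example = bce(learner(X).squeeze(-1), y)
    raw = adversary(torch.cat([X, y.unsqueeze(1)], dim=1)).squeeze(-1)
    w = 1.0 + n * torch.softmax(raw, dim=0)   # normalized example weights
    opt_l.zero_grad()
    (w.detach() * per_example).mean().backward()
    opt_l.step()

    # Adversary step: maximize the same reweighted loss w.r.t. its parameters.
    per_example = bce(learner(X).squeeze(-1), y).detach()
    raw = adversary(torch.cat([X, y.unsqueeze(1)], dim=1)).squeeze(-1)
    w = 1.0 + n * torch.softmax(raw, dim=0)
    opt_a.zero_grad()
    (-(w * per_example).mean()).backward()
    opt_a.step()
```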

An Empirical Study on Learning Fairness Metrics for COMPAS Data with Human Supervision

1 code implementation 22 Oct 2019 Hanchen Wang, Nina Grgic-Hlaca, Preethi Lahoti, Krishna P. Gummadi, Adrian Weller

We do not provide a way to directly learn a similarity metric satisfying individual fairness; rather, we present an empirical study of how such a metric can be derived from human supervisors, which future work can use as a tool to understand human supervision.

Fairness Metric Learning

Operationalizing Individual Fairness with Pairwise Fair Representations

no code implementations 2 Jul 2019 Preethi Lahoti, Krishna P. Gummadi, Gerhard Weikum

We revisit the notion of individual fairness proposed by Dwork et al. A central challenge in operationalizing their approach is the difficulty in eliciting a human specification of a similarity metric.

Fairness

iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making

no code implementations 4 Jun 2018 Preethi Lahoti, Krishna P. Gummadi, Gerhard Weikum

We demonstrate the versatility of our method by applying it to classification and learning-to-rank tasks on a variety of real-world datasets.

Decision Making Fairness +1

Joint Non-negative Matrix Factorization for Learning Ideological Leaning on Twitter

no code implementations 28 Nov 2017 Preethi Lahoti, Kiran Garimella, Aristides Gionis

We model the problem of learning the liberal-conservative ideology space of social media users and media sources as a constrained non-negative matrix-factorization problem.

Social and Information Networks
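
For orientation, the snippet below is a minimal, hypothetical sketch of recovering a two-dimensional ideology space with plain NMF in scikit-learn. The paper's actual formulation jointly factorizes several interaction matrices with additional constraints, which this example does not reproduce; the toy data and the leaning-score heuristic are assumptions made for illustration.

```python
# Minimal sketch (not the paper's joint, constrained formulation): rank-2 NMF
# of a user-by-source interaction matrix; the two latent dimensions can be
# read, roughly, as opposite ideological leanings.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Hypothetical data: 100 users x 20 media sources, counts of shares/retweets.
A = rng.poisson(lam=1.0, size=(100, 20)).astype(float)

model = NMF(n_components=2, init="nndsvda", max_iter=500)
U = model.fit_transform(A)   # users   x 2: latent ideology space of users
S = model.components_.T      # sources x 2: latent ideology space of sources

# A simple leaning score in [-1, 1]: contrast between the two latent dimensions.
user_leaning = (U[:, 0] - U[:, 1]) / (U.sum(axis=1) + 1e-9)
source_leaning = (S[:, 0] - S[:, 1]) / (S.sum(axis=1) + 1e-9)
print(user_leaning[:5], source_leaning[:5])
```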
