no code implementations • 19 Dec 2023 • Ben Snyder, Marius Moisescu, Muhammad Bilal Zafar
Building on this insight, we train binary classifiers that use these artifacts as input features to classify model generations into hallucinations and non-hallucinations.
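As a hedged illustration of this pipeline, the sketch below trains such a classifier on simple token-probability statistics; the specific features, data, and labels are hypothetical stand-ins for the model artifacts studied in the paper.

```python
# Illustrative sketch: a binary hallucination classifier trained on
# generation artifacts. Here the artifacts are simple token-probability
# statistics; the feature set and data below are purely hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

def artifact_features(token_logprobs):
    """Summarize one generation's token log-probabilities as features."""
    lp = np.asarray(token_logprobs, dtype=float)
    return np.array([lp.mean(), lp.min(), lp.std(), np.exp(lp).mean()])

# Hypothetical labeled data: token log-probs per generation,
# with 1 = hallucination, 0 = faithful.
generations = [[-0.1, -0.3, -2.5], [-0.05, -0.2, -0.1],
               [-1.9, -3.2, -0.8], [-0.2, -0.1, -0.3]]
labels = [1, 0, 1, 0]

X = np.stack([artifact_features(g) for g in generations])
clf = LogisticRegression().fit(X, labels)
print(clf.predict_proba(X)[:, 1])  # hallucination probability per generation
```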
1 code implementation • 26 Feb 2023 • Matthäus Kleindessner, Michele Donini, Chris Russell, Muhammad Bilal Zafar
We revisit the problem of fair principal component analysis (PCA), where the goal is to learn the best low-rank linear approximation of the data that obfuscates demographic information.
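One minimal way to realize this constraint, sketched below under the simplifying assumption that obfuscation only needs to remove first-moment (group-mean) information: project the data onto the subspace orthogonal to the group-mean difference before running standard PCA. This illustrates the idea and is not necessarily the paper's algorithm.

```python
# Sketch of the fair-PCA idea: learn a low-rank projection while suppressing
# demographic information. Here we only remove the direction separating the
# two group means before running ordinary PCA.
import numpy as np

def fair_pca(X, groups, k):
    X = X - X.mean(axis=0)
    # Direction along which the two group means differ.
    d = X[groups == 0].mean(axis=0) - X[groups == 1].mean(axis=0)
    d /= np.linalg.norm(d)
    # Project the data onto the orthogonal complement of d.
    X_fair = X - np.outer(X @ d, d)
    # Standard PCA on the constrained data.
    U, S, Vt = np.linalg.svd(X_fair, full_matrices=False)
    return Vt[:k]  # k principal directions orthogonal to d

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
groups = rng.integers(0, 2, size=200)
V = fair_pca(X, groups, k=2)
print(V.shape)  # (2, 5)
```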
no code implementations • 23 Dec 2022 • Parantapa Bhattacharya, Saptarshi Ghosh, Muhammad Bilal Zafar, Soumya K. Ghosh, Niloy Ganguly
With over 500 million tweets posted per day on Twitter, it is difficult for users to discover interesting content amid the deluge of uninteresting posts.
no code implementations • 21 Mar 2022 • Deborah Sulem, Michele Donini, Muhammad Bilal Zafar, Francois-Xavier Aubet, Jan Gasthaus, Tim Januschowski, Sanjiv Das, Krishnaram Kenthapadi, Cedric Archambeau
In this work we propose a model-agnostic algorithm that generates counterfactual ensemble explanations for time series anomaly detection models.
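A toy version of the idea: given a detector and a flagged window, search for a nearby window that the detector no longer flags. The z-score detector and the simple nudge-toward-the-mean update below are illustrative stand-ins for the paper's model-agnostic procedure.

```python
# Toy sketch of a counterfactual explanation for a time-series anomaly
# detector: minimally shift the flagged window toward the series' typical
# level until the detector no longer fires.
import numpy as np

def is_anomalous(window, mean, std, z=3.0):
    return np.abs((window - mean) / std).max() > z

def counterfactual(window, mean, std, step=0.05, max_iter=200):
    cf = window.astype(float).copy()
    for _ in range(max_iter):
        if not is_anomalous(cf, mean, std):
            return cf             # small change that flips the detector
        cf += step * (mean - cf)  # nudge toward the typical level
    return cf

series = np.sin(np.linspace(0, 20, 500))
mean, std = series.mean(), series.std()
window = series[100:110] + 5.0  # injected anomaly
print(is_anomalous(window, mean, std))                             # True
print(is_anomalous(counterfactual(window, mean, std), mean, std))  # False
```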
no code implementations • 23 Dec 2021 • Muhammad Bilal Zafar, Philipp Schmidt, Michele Donini, Cédric Archambeau, Felix Biessmann, Sanjiv Ranjan Das, Krishnaram Kenthapadi
The large size and complex decision mechanisms of state-of-the-art text classifiers make it difficult for humans to understand their predictions, leading to a potential lack of trust among users.
no code implementations • 26 Nov 2021 • David Nigenda, Zohar Karnin, Muhammad Bilal Zafar, Raghu Ramesha, Alan Tan, Michele Donini, Krishnaram Kenthapadi
With the increasing adoption of machine learning (ML) models and systems in high-stakes settings across different industries, guaranteeing a model's performance after deployment has become crucial.
1 code implementation • 7 Sep 2021 • Michaela Hardt, Xiaoguang Chen, Xiaoyi Cheng, Michele Donini, Jason Gelman, Satish Gollaprolu, John He, Pedro Larroy, Xinyu Liu, Nick McCarthy, Ashish Rathi, Scott Rees, Ankit Siva, ErhYuan Tsai, Keerthan Vasist, Pinar Yilmaz, Muhammad Bilal Zafar, Sanjiv Das, Kevin Haas, Tyler Hill, Krishnaram Kenthapadi
We present Amazon SageMaker Clarify, an explainability feature for Amazon SageMaker that launched in December 2020, providing insights into data and ML models by identifying biases and explaining predictions.
1 code implementation • 13 Jul 2021 • Umang Bhatt, Isabel Chien, Muhammad Bilal Zafar, Adrian Weller
In this work, we take a step towards finding influential training points that also represent the training data well.
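One way to make this concrete, as a hedged sketch: greedily select points that score high on influence while staying far from points already chosen, so the selected set both matters to the model and covers the data. The influence scores and the distance-based diversity bonus below are illustrative, not the paper's method.

```python
# Hedged sketch: greedily pick training points that are both influential
# (stand-in scores here) and diverse, so the selection also represents
# the training data.
import numpy as np

def select_influential_diverse(X, influence, k, lam=1.0):
    chosen = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(X)):
            if i in chosen:
                continue
            # Diversity bonus: distance to the closest already-chosen point.
            div = min((np.linalg.norm(X[i] - X[j]) for j in chosen),
                      default=0.0)
            score = influence[i] + lam * div
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
influence = rng.random(50)  # stand-in influence scores
print(select_influential_diverse(X, influence, k=5))
```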
2 code implementations • 23 Jun 2021 • Robin Schmucker, Michele Donini, Muhammad Bilal Zafar, David Salinas, Cédric Archambeau
Hyperparameter optimization (HPO) is increasingly used to automatically tune the predictive performance (e.g., accuracy) of machine learning models.
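For readers new to the terminology, a minimal single-objective HPO loop looks like the random search below; the paper itself studies multi-objective extensions of successive halving rather than this toy setup.

```python
# Minimal random-search HPO loop to ground the terminology: sample
# configurations, evaluate by cross-validation, keep the best.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, random_state=0)
rng = np.random.default_rng(0)

best_c, best_acc = None, -np.inf
for _ in range(20):               # 20 random configurations
    C = 10 ** rng.uniform(-3, 3)  # sample a regularization strength
    acc = cross_val_score(LogisticRegression(C=C, max_iter=500), X, y).mean()
    if acc > best_acc:
        best_c, best_acc = C, acc
print(f"best C={best_c:.4g}, accuracy={best_acc:.3f}")
```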
no code implementations • Findings (ACL) 2021 • Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Das, Krishnaram Kenthapadi
With the ever-increasing complexity of neural language models, practitioners have turned to methods for understanding the predictions of these models.
no code implementations • 10 May 2021 • Junaid Ali, Muhammad Bilal Zafar, Adish Singla, Krishna P. Gummadi
Motivated by extensive literature in behavioral economics and behavioral psychology (prospect theory), we propose a notion of fair updates that we refer to as loss-averse updates.
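A small sketch of what checking such an update could look like: count the users whom the old model served correctly but the new model misclassifies, and accept the update only if that harm stays below a tolerance. The acceptance rule below is an illustrative stand-in for the paper's formal notion.

```python
# Sketch of a "loss-averse" update check: flag updates that newly
# misclassify users the old model got right.
import numpy as np

def update_is_loss_averse(y_true, old_pred, new_pred, tol=0.0):
    was_correct = old_pred == y_true
    now_wrong = new_pred != y_true
    harmed = np.mean(was_correct & now_wrong)  # fraction who lose out
    return harmed <= tol, harmed

y_true   = np.array([1, 0, 1, 1, 0, 1])
old_pred = np.array([1, 0, 0, 1, 0, 1])
new_pred = np.array([1, 0, 1, 0, 0, 1])  # fixes one user, harms another
ok, harmed = update_is_loss_averse(y_true, old_pred, new_pred)
print(ok, harmed)  # False 0.1666...
```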
1 code implementation • 7 May 2021 • Matthäus Kleindessner, Samira Samadi, Muhammad Bilal Zafar, Krishnaram Kenthapadi, Chris Russell
We initiate the study of fairness for ordinal regression.
no code implementations • 1 Jul 2020 • Vedant Nanda, Till Speicher, John P. Dickerson, Krishna P. Gummadi, Muhammad Bilal Zafar
Our framework defines a large number of concepts that the DNN explanations could be based on and performs the explanation-conformity check at test time to assess prediction robustness.
no code implementations • 9 Jun 2020 • Valerio Perrone, Michele Donini, Muhammad Bilal Zafar, Robin Schmucker, Krishnaram Kenthapadi, Cédric Archambeau
Moreover, our method can be used in synergy with such specialized fairness techniques to tune their hyperparameters.
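As a rough illustration of fairness-aware hyperparameter tuning (the paper uses constrained Bayesian optimization; the random search and stand-in evaluation function below are hypothetical):

```python
# Toy sketch of fairness-constrained hyperparameter search: keep the most
# accurate configuration whose unfairness stays below a threshold.
import numpy as np

def constrained_search(evaluate, sample_config, n_trials=30, eps=0.05):
    best_cfg, best_acc = None, -np.inf
    for _ in range(n_trials):
        cfg = sample_config()
        acc, unfairness = evaluate(cfg)  # e.g., accuracy and a fairness gap
        if unfairness <= eps and acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc

rng = np.random.default_rng(0)
sample = lambda: {"C": 10 ** rng.uniform(-3, 3)}
# Stand-in evaluation: accuracy and unfairness as simple functions of C.
evaluate = lambda cfg: (0.8 - 0.01 * abs(np.log10(cfg["C"])),
                        0.02 * abs(np.log10(cfg["C"])))
print(constrained_search(evaluate, sample))
```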
no code implementations • 2 Jul 2018 • Till Speicher, Hoda Heidari, Nina Grgic-Hlaca, Krishna P. Gummadi, Adish Singla, Adrian Weller, Muhammad Bilal Zafar
Further, our work reveals overlooked tradeoffs between different fairness notions: using our proposed measures, the overall individual-level unfairness of an algorithm can be decomposed into a between-group and a within-group component.
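A sketch of this decomposition for a generalized entropy index over individual benefits (the paper considers benefits such as b_i = yhat_i - y_i + 1); the within-group and between-group terms below sum exactly to the overall index:

```python
# Between/within-group decomposition of a generalized entropy index
# over individual "benefits".
import numpy as np

def ge(b, alpha=2):
    """Generalized entropy index of a benefit vector (alpha not in {0, 1})."""
    b = np.asarray(b, dtype=float)
    return ((b / b.mean()) ** alpha - 1).sum() / (len(b) * alpha * (alpha - 1))

def decompose(b, groups, alpha=2):
    b, groups = np.asarray(b, dtype=float), np.asarray(groups)
    n, mu = len(b), b.mean()
    within = sum(
        ((groups == g).sum() / n) * (b[groups == g].mean() / mu) ** alpha
        * ge(b[groups == g], alpha)
        for g in np.unique(groups)
    )
    # Between-group term: replace each benefit by its group's mean benefit.
    between = ge([b[groups == g].mean() for g in groups], alpha)
    return ge(b, alpha), within, between

benefits = [1, 2, 1, 0, 2, 2]
groups   = [0, 0, 1, 1, 0, 1]
total, within, between = decompose(benefits, groups)
print(total, within + between)  # the two components sum to the total
```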
1 code implementation • NeurIPS 2017 • Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi, Adrian Weller
The adoption of automated, data-driven decision making in an ever-expanding range of applications has raised concerns about its potential unfairness towards certain social groups.
no code implementations • 30 Jun 2017 • Nina Grgić-Hlača, Muhammad Bilal Zafar, Krishna P. Gummadi, Adrian Weller
Consider a binary decision making process where a single machine learning classifier replaces a multitude of humans.
no code implementations • 31 Oct 2016 • Miguel Ferreira, Muhammad Bilal Zafar, Krishna P. Gummadi
Bringing transparency to black-box decision making systems (DMS) has been a topic of increasing research interest in recent years.
3 code implementations • 26 Oct 2016 • Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi
To account for and avoid such unfairness, in this paper, we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates.
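Measuring disparate mistreatment reduces to comparing group-conditional error rates, as in this minimal sketch (the data is illustrative):

```python
# Minimal sketch of measuring disparate mistreatment: compare false-positive
# and false-negative rates across two demographic groups.
import numpy as np

def error_rates(y_true, y_pred):
    fpr = np.mean(y_pred[y_true == 0] == 1)  # false-positive rate
    fnr = np.mean(y_pred[y_true == 1] == 0)  # false-negative rate
    return fpr, fnr

y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

for g in (0, 1):
    fpr, fnr = error_rates(y_true[group == g], y_pred[group == g])
    print(f"group {g}: FPR={fpr:.2f}, FNR={fnr:.2f}")
# Disparate mistreatment exists when these rates differ across groups.
```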
2 code implementations • 19 Jul 2015 • Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi
Algorithmic decision making systems are ubiquitous across a wide variety of online and offline services.