With over 500 million tweets posted per day on Twitter, it is difficult for users to discover interesting content amid the deluge of uninteresting posts.
no code implementations • 21 Mar 2022 • Deborah Sulem, Michele Donini, Muhammad Bilal Zafar, Francois-Xavier Aubet, Jan Gasthaus, Tim Januschowski, Sanjiv Das, Krishnaram Kenthapadi, Cedric Archambeau
In this work we propose a model-agnostic algorithm that generates counterfactual ensemble explanations for time series anomaly detection models.
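As a rough illustration of the underlying idea only (not the paper's actual algorithm), a counterfactual for an anomalous time-series window can be found by perturbing it toward a non-anomalous reference until the detector's score drops below its threshold; the detector interface, the linear interpolation scheme, and all names below are assumptions made for the sketch.

```python
import numpy as np

def counterfactual_window(window, reference, anomaly_score, threshold, steps=20):
    """Toy counterfactual search for a time-series anomaly detector.

    Interpolates the anomalous window toward a non-anomalous reference signal and
    returns the first (i.e., smallest-change) candidate the detector no longer flags.
    """
    for alpha in np.linspace(0.0, 1.0, steps):
        candidate = (1 - alpha) * window + alpha * reference
        if anomaly_score(candidate) < threshold:  # detector no longer flags it
            return candidate, alpha
    return reference, 1.0  # fall back to the reference if nothing smaller works
```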
The large size and complex decision mechanisms of state-of-the-art text classifiers make it difficult for humans to understand their predictions, leading to a potential lack of trust among users.
With the increasing adoption of machine learning (ML) models and systems in high-stakes settings across different industries, guaranteeing a model's performance after deployment has become crucial.
1 code implementation • 7 Sep 2021 • Michaela Hardt, Xiaoguang Chen, Xiaoyi Cheng, Michele Donini, Jason Gelman, Satish Gollaprolu, John He, Pedro Larroy, Xinyu Liu, Nick McCarthy, Ashish Rathi, Scott Rees, Ankit Siva, ErhYuan Tsai, Keerthan Vasist, Pinar Yilmaz, Muhammad Bilal Zafar, Sanjiv Das, Kevin Haas, Tyler Hill, Krishnaram Kenthapadi
We present Amazon SageMaker Clarify, an explainability feature for Amazon SageMaker that launched in December 2020, providing insights into data and ML models by identifying biases and explaining predictions.
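For orientation, here is a minimal sketch of invoking a pre-training bias analysis through the `sagemaker.clarify` module of the SageMaker Python SDK; the IAM role ARN, S3 URIs, column names, and facet values are placeholders, and the exact configuration will differ per use case.

```python
import sagemaker
from sagemaker import clarify

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder
train_uri = "s3://my-bucket/clarify/train.csv"                  # placeholder
output_uri = "s3://my-bucket/clarify/output"                    # placeholder

clarify_processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path=train_uri,
    s3_output_path=output_uri,
    label="target",
    headers=["gender", "age", "income", "target"],  # placeholder column names
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],         # favorable label value
    facet_name="gender",                   # sensitive attribute (placeholder)
    facet_values_or_threshold=["female"],  # facet value of interest (placeholder)
)

# Runs a processing job that computes pre-training bias metrics and writes a report to S3.
clarify_processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods="all",
)
```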
In this work, we take a step towards finding influential training points that also represent the training data well.
Hyperparameter optimization (HPO) is increasingly used to automatically tune the predictive performance (e.g., accuracy) of machine learning models.
With the ever-increasing complexity of neural language models, practitioners have turned to methods for understanding the predictions of these models.
Motivated by extensive literature in behavioral economics and behavioral psychology (prospect theory), we propose a notion of fair updates that we refer to as loss-averse updates.
Our framework defines a large number of concepts that the DNN explanations could be based on and performs the explanation-conformity check at test time to assess prediction robustness.
Moreover, our method can be used in synergy with such specialized fairness techniques to tune their hyperparameters.
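A toy sketch of the general pattern, tuning hyperparameters for accuracy while rejecting configurations that violate a fairness constraint, is shown below; it uses plain random search rather than the paper's method, and `train_eval`, `fairness_of`, the search `space`, and the threshold `eps` are assumed to be supplied by the caller.

```python
import numpy as np

def fairness_constrained_random_search(train_eval, fairness_of, space,
                                       budget=50, eps=0.05, seed=0):
    """Random-search HPO that maximizes accuracy subject to a fairness constraint.

    train_eval(config) -> validation accuracy.
    fairness_of(config) -> fairness violation, e.g. |positive rate gap| across groups.
    space maps each hyperparameter name to a list of candidate values.
    """
    rng = np.random.default_rng(seed)
    best_cfg, best_acc = None, -np.inf
    for _ in range(budget):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        if fairness_of(cfg) > eps:   # skip configurations violating the constraint
            continue
        acc = train_eval(cfg)
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc
```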
Further, our work reveals overlooked tradeoffs between different fairness notions: using our proposed measures, the overall individual-level unfairness of an algorithm can be decomposed into a between-group and a within-group component.
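One concrete way such a decomposition can be written, shown here only as a sketch assuming an additively decomposable inequality index (e.g., a generalized entropy index over per-individual benefit scores $b_i$ with overall mean $\mu$, group sizes $n_g$, and group means $\mu_g$), is:

```latex
% Generalized entropy index over individual benefits b_i (mean \mu), \alpha \notin \{0,1\}:
\mathcal{E}^{\alpha}(b_1,\dots,b_n)
  = \frac{1}{n\,\alpha(\alpha-1)} \sum_{i=1}^{n}
    \left[ \left( \frac{b_i}{\mu} \right)^{\alpha} - 1 \right]

% Additive decomposition over groups g = 1,\dots,G:
\mathcal{E}^{\alpha}
  = \underbrace{\sum_{g=1}^{G} \frac{n_g}{n}
      \left( \frac{\mu_g}{\mu} \right)^{\alpha} \mathcal{E}^{\alpha}_{g}}_{\text{within-group}}
  \;+\;
  \underbrace{\mathcal{E}^{\alpha}\bigl(
      \underbrace{\mu_1,\dots,\mu_1}_{n_1},\,\dots,\,
      \underbrace{\mu_G,\dots,\mu_G}_{n_G}\bigr)}_{\text{between-group}}
```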
Consider a binary decision making process where a single machine learning classifier replaces a multitude of humans.
The adoption of automated, data-driven decision making in an ever-expanding range of applications has raised concerns about its potential unfairness towards certain social groups.
Bringing transparency to black-box decision making systems (DMS) has been a topic of increasing research interest in recent years.
To account for and avoid such unfairness, in this paper, we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates.
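A minimal sketch of how such misclassification-rate disparities could be computed from predictions and group membership follows; the function and variable names are illustrative rather than the paper's exact formulation.

```python
import numpy as np

def misclassification_rates(y_true, y_pred, group):
    """Per-group false positive and false negative rates for a binary classifier."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in np.unique(group):
        mask = group == g
        yt, yp = y_true[mask], y_pred[mask]
        fpr = np.mean(yp[yt == 0] == 1) if np.any(yt == 0) else np.nan
        fnr = np.mean(yp[yt == 1] == 0) if np.any(yt == 1) else np.nan
        rates[g] = {"FPR": fpr, "FNR": fnr}
    return rates

# Under this reading, disparate mistreatment shows up as large gaps between the
# groups' rates, e.g. |FPR_a - FPR_b| or |FNR_a - FNR_b| being far from zero.
rates = misclassification_rates([0, 1, 1, 0, 1], [0, 1, 0, 1, 1], ["a", "a", "b", "b", "b"])
```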
Algorithmic decision making systems are ubiquitous across a wide variety of online as well as offline services.