Search Results for author: Julius Adebayo

Found 11 papers, 6 papers with code

Error Discovery by Clustering Influence Embeddings

no code implementations • NeurIPS 2023 • Fulton Wang, Julius Adebayo, Sarah Tan, Diego Garcia-Olano, Narine Kokhlikyan

We present a method for identifying groups of test examples -- slices -- on which a model under-performs, a task now known as slice discovery.

Clustering
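No implementation is linked for this paper, so the following is only a rough sketch of the general slice-discovery recipe it describes: cluster per-example embeddings and flag the cluster with the lowest accuracy as a candidate slice. The k-means step, the synthetic embeddings, and the `find_worst_slice` helper are illustrative stand-ins, not the paper's influence-embedding construction.

```python
# Minimal sketch: find a candidate under-performing slice by clustering
# per-example embeddings and ranking clusters by accuracy.  Illustrates the
# general recipe only, not the paper's influence embeddings.
import numpy as np
from sklearn.cluster import KMeans

def find_worst_slice(embeddings, predictions, labels, n_clusters=10, seed=0):
    """Cluster test-example embeddings and return indices of the cluster
    with the lowest accuracy (a candidate 'slice')."""
    cluster_ids = KMeans(n_clusters=n_clusters, random_state=seed,
                         n_init=10).fit_predict(embeddings)
    correct = (predictions == labels).astype(float)
    worst_id, worst_acc = None, float("inf")
    for c in range(n_clusters):
        mask = cluster_ids == c
        if mask.sum() == 0:
            continue
        acc = correct[mask].mean()
        if acc < worst_acc:
            worst_id, worst_acc = c, acc
    return np.where(cluster_ids == worst_id)[0], worst_acc

# Synthetic data standing in for real embeddings, predictions, and labels.
rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 32))
preds = rng.integers(0, 2, size=1000)
labels = rng.integers(0, 2, size=1000)
slice_idx, slice_acc = find_worst_slice(emb, preds, labels)
print(f"candidate slice: {len(slice_idx)} examples, accuracy {slice_acc:.2f}")
```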

Quantifying and mitigating the impact of label errors on model disparity metrics

no code implementations • 4 Oct 2023 • Julius Adebayo, Melissa Hall, Bowen Yu, Bobbie Chern

We empirically assess the proposed approach on a variety of datasets and find significant improvement, compared to alternative approaches, in identifying training inputs that improve a model's disparity metric.

Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Correlation

no code implementations • ICLR 2022 • Julius Adebayo, Michael Muelly, Hal Abelson, Been Kim

We investigate whether three types of post hoc model explanations--feature attribution, concept activation, and training point ranking--are effective for detecting a model's reliance on spurious signals in the training data.

Debugging Tests for Model Explanations

1 code implementation • NeurIPS 2020 • Julius Adebayo, Michael Muelly, Ilaria Liccardi, Been Kim

For several explanation methods, we assess their ability to: detect spurious correlation artifacts (data contamination), diagnose mislabeled training examples (data contamination), differentiate between a (partially) re-initialized model and a trained one (model contamination), and detect out-of-distribution inputs (test-time contamination).
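As a hedged illustration of one of these debugging tests (mislabeled training examples), the sketch below corrupts a fraction of labels, ranks training points by per-example loss, and measures how many corrupted points surface near the top. The loss-based ranking is a simple stand-in for the explanation methods the paper actually evaluates.

```python
# Sketch of a 'mislabeled training example' debugging test: corrupt some
# labels, rank training points by a per-example score, and check how many
# corrupted points appear near the top.  A per-example loss stands in for
# an explanation-based ranking; this is not the paper's method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
y_noisy = y.copy()
flip_idx = rng.choice(len(y), size=200, replace=False)  # 10% label noise
y_noisy[flip_idx] = 1 - y_noisy[flip_idx]

model = LogisticRegression(max_iter=1000).fit(X, y_noisy)
proba = model.predict_proba(X)[np.arange(len(y_noisy)), y_noisy]
per_example_loss = -np.log(np.clip(proba, 1e-12, None))

# Rank training points by loss and measure precision@k at surfacing flips.
k = 200
top_k = np.argsort(per_example_loss)[::-1][:k]
precision = np.isin(top_k, flip_idx).mean()
print(f"precision@{k} at surfacing mislabeled points: {precision:.2f}")
```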

Assessing the (Un)Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging

1 code implementation • 6 Aug 2020 • Nishanth Arun, Nathan Gaw, Praveer Singh, Ken Chang, Mehak Aggarwal, Bryan Chen, Katharina Hoebel, Sharut Gupta, Jay Patel, Mishka Gidwani, Julius Adebayo, Matthew D. Li, Jayashree Kalpathy-Cramer

Saliency maps have become a widely used method to make deep learning models more interpretable by providing post-hoc explanations of classifiers through identification of the most pertinent areas of the input medical image.

SSIM

Explaining Explanations to Society

no code implementations • 19 Jan 2019 • Leilani H. Gilpin, Cecilia Testart, Nathaniel Fruchter, Julius Adebayo

We explore the types of questions that explanatory DNN systems can answer and discuss challenges in building explanatory systems that provide outside explanations for societal requirements and benefit.

Decision Making • Explainable Artificial Intelligence (XAI)

Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values

2 code implementations • 8 Oct 2018 • Julius Adebayo, Justin Gilmer, Ian Goodfellow, Been Kim

Explaining the output of a complicated machine learning model like a deep neural network (DNN) is a central challenge in machine learning.

BIG-bench Machine Learning
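The sketch below illustrates the kind of parameter-sensitivity check this paper studies: compare gradient saliency from a (notionally trained) model against saliency from a re-initialized copy; a high rank correlation would indicate the explanation is insensitive to the learned parameters. The small MLP, plain gradient saliency, and Spearman correlation are illustrative choices, not the paper's exact setup.

```python
# Parameter-randomization check: if a saliency method is sensitive to what
# the model has learned, saliency from a trained model and from a
# re-initialized copy should differ.  Toy MLP and gradient saliency only.
import copy
import torch
import torch.nn as nn
from scipy.stats import spearmanr

def gradient_saliency(model, x, target):
    x = x.clone().requires_grad_(True)
    model(x)[0, target].backward()
    return x.grad.detach().abs().flatten()

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
# ... assume `model` has been trained; training is skipped here for brevity ...

randomized = copy.deepcopy(model)
for layer in randomized:
    if hasattr(layer, "reset_parameters"):
        layer.reset_parameters()  # re-initialize the layer's weights

x = torch.randn(1, 64)
s_trained = gradient_saliency(model, x, target=3)
s_random = gradient_saliency(randomized, x, target=3)
rho, _ = spearmanr(s_trained.numpy(), s_random.numpy())
print(f"rank correlation between saliency maps: {rho:.2f}")  # low => sensitive
```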

Investigating Human + Machine Complementarity for Recidivism Predictions

no code implementations • 28 Aug 2018 • Sarah Tan, Julius Adebayo, Kori Inkpen, Ece Kamar

Dressel and Farid (2018) asked Mechanical Turk workers to evaluate a subset of defendants in the ProPublica COMPAS data for risk of recidivism, and concluded that COMPAS predictions were no more accurate or fair than predictions made by humans.

Decision Making • Fairness

Iterative Orthogonal Feature Projection for Diagnosing Bias in Black-Box Models

1 code implementation • 15 Nov 2016 • Julius Adebayo, Lalana Kagal

Predictive models are increasingly deployed for the purpose of determining access to services such as credit, insurance, and employment.

Fairness
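As a rough illustration of the projection idea behind this paper, the sketch below removes from each feature the component that is linearly predictable from a protected attribute. It shows a single least-squares residualization step only, not the paper's full iterative procedure, and the data is synthetic.

```python
# Rough sketch of the orthogonal-projection idea: strip from each feature
# the component that is linearly predictable from a protected attribute,
# so a downstream model cannot use that linear signal.  One projection
# step only; not the paper's iterative algorithm.
import numpy as np

def project_out(X, z):
    """Return X with the component along protected attribute z removed
    from every column (least-squares residualization)."""
    Z = np.column_stack([np.ones_like(z), z])      # intercept + attribute
    beta, *_ = np.linalg.lstsq(Z, X, rcond=None)   # regress each column on z
    return X - Z @ beta                            # keep only the residuals

rng = np.random.default_rng(0)
z = rng.integers(0, 2, size=500).astype(float)     # protected attribute
X = rng.normal(size=(500, 5)) + 2.0 * z[:, None]   # features leak z
X_proj = project_out(X, z)
# Correlations with z are (numerically) zero after projection.
print(np.round([np.corrcoef(X_proj[:, j], z)[0, 1] for j in range(5)], 6))
```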
