Search Results for author: Julius Adebayo

Found 9 papers, 4 papers with code

Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Correlation

no code implementations • ICLR 2022 • Julius Adebayo, Michael Muelly, Harold Abelson, Been Kim

Ascertaining that a deep network does not rely on an unknown spurious signal as the basis for its output, prior to deployment, is crucial in high-stakes settings like healthcare.

Debugging Tests for Model Explanations

no code implementations • NeurIPS 2020 • Julius Adebayo, Michael Muelly, Ilaria Liccardi, Been Kim

For several explanation methods, we assess their ability to: detect spurious correlation artifacts (data contamination), diagnose mislabeled training examples (data contamination), differentiate between a (partially) re-initialized model and a trained one (model contamination), and detect out-of-distribution inputs (test-time contamination).
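
To make the model-contamination test concrete, here is a minimal sketch (not the paper's implementation) of one way such a check can be run: compute a gradient saliency map for a trained network and for a copy whose final layer has been re-initialized, then compare the two maps with Spearman rank correlation. The model, input, and choice of randomized layer are illustrative assumptions.

```python
# Sketch of a model-contamination debugging test: does a gradient saliency
# map change when the network's final layer is re-initialized? (Illustrative
# setup; not the paper's exact protocol.)
import copy

import torch
import torchvision.models as models
from scipy.stats import spearmanr

def gradient_saliency(model, x):
    """Absolute input-gradient saliency for the top predicted class."""
    model.eval()
    x = x.clone().requires_grad_(True)
    logits = model(x)
    top_class = logits.argmax(dim=1).item()
    logits[0, top_class].backward()
    return x.grad.abs().sum(dim=1).flatten()  # aggregate over channels

trained = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
contaminated = copy.deepcopy(trained)
torch.nn.init.normal_(contaminated.fc.weight)  # re-initialize the last layer

x = torch.randn(1, 3, 224, 224)  # stand-in for a real input image
s_trained = gradient_saliency(trained, x)
s_contam = gradient_saliency(contaminated, x)

# An explanation method that is sensitive to model parameters should yield
# a low rank correlation between the two maps.
rho, _ = spearmanr(s_trained.numpy(), s_contam.numpy())
print(f"Spearman rank correlation: {rho:.3f}")
```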

Assessing the (Un)Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging

1 code implementation • 6 Aug 2020 • Nishanth Arun, Nathan Gaw, Praveer Singh, Ken Chang, Mehak Aggarwal, Bryan Chen, Katharina Hoebel, Sharut Gupta, Jay Patel, Mishka Gidwani, Julius Adebayo, Matthew D. Li, Jayashree Kalpathy-Cramer

Saliency maps have become a widely used method for making deep learning models more interpretable: they provide post-hoc explanations of classifiers by identifying the most pertinent areas of the input medical image.
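
One common way to quantify such (un)trustworthiness is to check whether the most salient pixels actually overlap the annotated abnormality. The sketch below computes an intersection-over-union score between a thresholded saliency map and a binary annotation mask; the arrays and the `keep_fraction` threshold are illustrative assumptions, a generic framing rather than the paper's exact metrics.

```python
# Sketch of a saliency-localization check: IoU between the top-k% salient
# pixels and an expert-annotated abnormality mask. (Illustrative data.)
import numpy as np

def saliency_localization_iou(saliency, mask, keep_fraction=0.05):
    """IoU between the top `keep_fraction` salient pixels and a binary mask."""
    cutoff = np.quantile(saliency, 1.0 - keep_fraction)
    salient = saliency >= cutoff
    intersection = np.logical_and(salient, mask).sum()
    union = np.logical_or(salient, mask).sum()
    return intersection / union if union > 0 else 0.0

rng = np.random.default_rng(0)
saliency = rng.random((224, 224))        # stand-in for a saliency map
mask = np.zeros((224, 224), dtype=bool)  # stand-in annotation mask
mask[80:140, 80:140] = True              # hypothetical abnormality region

print(f"localization IoU: {saliency_localization_iou(saliency, mask):.3f}")
```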


Explaining Explanations to Society

no code implementations • 19 Jan 2019 • Leilani H. Gilpin, Cecilia Testart, Nathaniel Fruchter, Julius Adebayo

We explore the types of questions that explanatory DNN systems can answer and discuss the challenges of building explanatory systems that provide explanations to outside parties, meeting societal requirements and delivering societal benefit.

Decision Making

Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values

no code implementations • 8 Oct 2018 • Julius Adebayo, Justin Gilmer, Ian Goodfellow, Been Kim

Explaining the output of a complicated machine learning model like a deep neural network (DNN) is a central challenge in machine learning.

Investigating Human + Machine Complementarity for Recidivism Predictions

no code implementations • 28 Aug 2018 • Sarah Tan, Julius Adebayo, Kori Inkpen, Ece Kamar

Dressel and Farid (2018) asked Mechanical Turk workers to evaluate a subset of defendants in the ProPublica COMPAS data for risk of recidivism, and concluded that COMPAS predictions were no more accurate or fair than predictions made by humans.

Decision Making • Fairness

Iterative Orthogonal Feature Projection for Diagnosing Bias in Black-Box Models

1 code implementation • 15 Nov 2016 • Julius Adebayo, Lalana Kagal

Predictive models are increasingly deployed for the purpose of determining access to services such as credit, insurance, and employment.
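
The title describes an iterative orthogonal projection procedure for probing whether a black-box model depends on a protected attribute. As a rough illustration of the core step only, the sketch below removes, via a single least-squares pass, the component of each feature that is linearly predictable from a protected attribute; the paper's iterative procedure is more involved, and all names and data here are illustrative.

```python
# Sketch of the core projection step: make each feature (linearly)
# uncorrelated with a protected attribute by removing its least-squares
# component along that attribute. (Simplified, one-pass illustration.)
import numpy as np

def project_out(X, s):
    """Return X with the least-squares component along s removed."""
    S = np.column_stack([np.ones_like(s), s])    # intercept + attribute
    beta, *_ = np.linalg.lstsq(S, X, rcond=None) # regress each feature on s
    return X - S @ beta                          # residuals orthogonal to s

rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=500).astype(float)   # protected attribute
X = rng.normal(size=(500, 4))
X[:, 0] += 2.0 * s                               # feature 0 encodes s

X_clean = project_out(X, s)
# After projection, feature 0 is linearly uncorrelated with s (~0).
print(np.corrcoef(X_clean[:, 0], s)[0, 1])
```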

