no code implementations • NeurIPS 2023 • Fulton Wang, Julius Adebayo, Sarah Tan, Diego Garcia-Olano, Narine Kokhlikyan
We present a method for identifying groups of test examples, called slices, on which a model underperforms, a task now known as slice discovery.
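The paper's algorithm is not reproduced here; as a rough, hedged sketch of the slice-discovery task itself, one might cluster test examples in some embedding space and flag clusters whose error rate sits well above the overall error rate. The embedding, cluster count, and gap threshold below are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.cluster import KMeans

def naive_slice_discovery(embeddings, labels, preds, n_clusters=10, min_gap=0.10):
    """Cluster test examples in an embedding space and flag clusters whose
    error rate exceeds the overall error rate by at least `min_gap`.
    Illustrative only; the clustering choice and threshold are assumptions."""
    errors = (np.asarray(labels) != np.asarray(preds)).astype(float)
    overall = errors.mean()
    cluster_ids = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    slices = []
    for c in range(n_clusters):
        mask = cluster_ids == c
        if mask.any() and errors[mask].mean() - overall >= min_gap:
            slices.append({"cluster": c, "size": int(mask.sum()),
                           "error_rate": float(errors[mask].mean())})
    return sorted(slices, key=lambda s: -s["error_rate"])
```

Called with test-set embeddings, true labels, and model predictions, this returns candidate slices ordered by error rate, which can then be inspected manually.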
no code implementations • 4 Oct 2023 • Julius Adebayo, Melissa Hall, Bowen Yu, Bobbie Chern
We empirically assess the proposed approach on a variety of datasets and find that it identifies training inputs that improve a model's disparity metric significantly better than alternative approaches.
no code implementations • ICLR 2022 • Julius Adebayo, Michael Muelly, Hal Abelson, Been Kim
We investigate whether three types of post hoc model explanations (feature attribution, concept activation, and training point ranking) are effective for detecting a model's reliance on spurious signals in the training data.
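As a hedged illustration of one such check (not the paper's protocol): when the spurious signal occupies a known region of the input, one can measure how much of a feature attribution's mass falls inside that region. The function name and the mask convention below are illustrative assumptions.

```python
import numpy as np

def spurious_mass_fraction(attribution, spurious_mask):
    """Fraction of total absolute attribution mass that falls inside a known
    spurious region, e.g. a watermark patch added to training images.
    Values near 1 suggest the attribution is concentrated on the artifact."""
    attribution = np.abs(np.asarray(attribution, dtype=float))
    mask = np.asarray(spurious_mask, dtype=bool)
    total = attribution.sum()
    if total == 0:
        return 0.0
    return float(attribution[mask].sum() / total)
```

Comparing the returned value against the area fraction of the mask (the value a uniform attribution would give) indicates whether the explanation concentrates on the spurious region more than chance.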
1 code implementation • NeurIPS 2020 • Julius Adebayo, Michael Muelly, Ilaria Liccardi, Been Kim
For several explanation methods, we assess their ability to: detect spurious correlation artifacts (data contamination), diagnose mislabeled training examples (data contamination), differentiate between a (partially) re-initialized model and a trained one (model contamination), and detect out-of-distribution inputs (test-time contamination).
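The paper's tests are built on explanation methods such as training point ranking; as a simpler stand-in for the mislabeled-example (data contamination) check, the sketch below ranks training examples by per-example loss under the trained model, a common heuristic for surfacing label errors. It assumes a PyTorch model and an unshuffled DataLoader, and is not the paper's procedure.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def rank_by_loss(model, loader, device="cpu"):
    """Rank training examples by per-example cross-entropy under the trained
    model; unusually high-loss examples are candidates for label errors.
    Assumes `loader` iterates the dataset in order (shuffle=False)."""
    model.eval()
    losses = []
    for x, y in loader:
        logits = model(x.to(device))
        losses.append(F.cross_entropy(logits, y.to(device), reduction="none").cpu())
    losses = torch.cat(losses)
    order = torch.argsort(losses, descending=True)  # dataset indices, highest loss first
    return order, losses[order]
```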
1 code implementation • 6 Aug 2020 • Nishanth Arun, Nathan Gaw, Praveer Singh, Ken Chang, Mehak Aggarwal, Bryan Chen, Katharina Hoebel, Sharut Gupta, Jay Patel, Mishka Gidwani, Julius Adebayo, Matthew D. Li, Jayashree Kalpathy-Cramer
Saliency maps have become a widely used method for making deep learning models more interpretable, providing post hoc explanations of classifiers by identifying the most pertinent regions of the input medical image.
no code implementations • 19 Jan 2019 • Leilani H. Gilpin, Cecilia Testart, Nathaniel Fruchter, Julius Adebayo
We explore the types of questions that explanatory DNN systems can answer and discuss challenges in building explanatory systems that provide outside explanations for societal requirements and benefits.
2 code implementations • 8 Oct 2018 • Julius Adebayo, Justin Gilmer, Ian Goodfellow, Been Kim
Explaining the output of a complicated machine learning model like a deep neural network (DNN) is a central challenge in machine learning.
5 code implementations • NeurIPS 2018 • Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
We find that relying solely on visual assessment can be misleading.
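A minimal sketch of how one might quantify this instead of eyeballing maps: compute a saliency map for the trained model and for a copy with some weights re-initialized, then compare the two maps with a rank correlation. The use of plain input-gradient saliency, Spearman correlation, the layer-prefix selection, and the normal re-initialization below are illustrative assumptions, not the paper's exact randomization protocol.

```python
import copy
import torch
from scipy.stats import spearmanr

def gradient_saliency(model, x, target):
    """Absolute input-gradient saliency for a batch of inputs."""
    x = x.clone().requires_grad_(True)
    model(x)[torch.arange(x.shape[0]), target].sum().backward()
    return x.grad.detach().abs()

def randomization_check(model, layer_prefix, x, target):
    """Rank correlation between saliency from the trained model and from a
    copy whose parameters under `layer_prefix` are re-initialized; a high
    correlation suggests the explanation is insensitive to those weights."""
    randomized = copy.deepcopy(model)
    for name, param in randomized.named_parameters():
        if name.startswith(layer_prefix):
            torch.nn.init.normal_(param, std=0.02)  # illustrative re-initialization
    s_trained = gradient_saliency(model, x, target).flatten().cpu().numpy()
    s_random = gradient_saliency(randomized, x, target).flatten().cpu().numpy()
    rho, _ = spearmanr(s_trained, s_random)
    return rho
```

An explanation that barely changes when the weights it is supposed to explain are destroyed (rho close to 1) is a warning sign, whatever the maps look like to the eye.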
no code implementations • 28 Aug 2018 • Sarah Tan, Julius Adebayo, Kori Inkpen, Ece Kamar
Dressel and Farid (2018) asked Mechanical Turk workers to evaluate a subset of defendants in the ProPublica COMPAS data for risk of recidivism, and concluded that COMPAS predictions were no more accurate or fair than predictions made by humans.
1 code implementation • ICLR 2018 • Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim
Saliency methods aim to explain the predictions of deep neural networks.
1 code implementation • 15 Nov 2016 • Julius Adebayo, Lalana Kagal
Predictive models are increasingly deployed for the purpose of determining access to services such as credit, insurance, and employment.