2 code implementations • 13 Jun 2023 • Nari Johnson, Ángel Alexander Cabrera, Gregory Plumb, Ameet Talwalkar
Motivated by these challenges, ML researchers have developed new slice discovery algorithms that aim to group together coherent and high-error subsets of data.
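The abstract above describes slice discovery: grouping data into coherent subsets and surfacing those where the model errs most. A minimal sketch of that idea, assuming a toy k-means clustering over embeddings and a simple error-rate ranking (this is an illustration, not any specific published algorithm):

```python
import numpy as np

def discover_slices(embeddings, errors, k=3, iters=20, seed=0):
    """Toy slice discovery: cluster embeddings with k-means, then rank
    clusters by mean model error (hypothetical helper, for illustration)."""
    rng = np.random.default_rng(seed)
    centers = embeddings[rng.choice(len(embeddings), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(embeddings[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        # Recompute centers as cluster means.
        for j in range(k):
            if (assign == j).any():
                centers[j] = embeddings[assign == j].mean(axis=0)
    # Rank clusters by mean error rate, highest-error "slices" first.
    rates = [(j, errors[assign == j].mean())
             for j in range(k) if (assign == j).any()]
    return sorted(rates, key=lambda t: -t[1]), assign

# Synthetic data: three blobs; the third blob carries most of the errors.
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal([0, 0], 0.3, (50, 2)),
                    rng.normal([5, 0], 0.3, (50, 2)),
                    rng.normal([0, 5], 0.3, (50, 2))])
err = np.concatenate([rng.random(100) < 0.05,
                      rng.random(50) < 0.8]).astype(float)
ranked, assign = discover_slices(X, err, k=3)
```

The top-ranked cluster approximates a high-error, semantically coherent slice when the embedding places similar inputs near each other.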
2 code implementations • 8 Jul 2022 • Gregory Plumb, Nari Johnson, Ángel Alexander Cabrera, Ameet Talwalkar
A growing body of work studies Blindspot Discovery Methods (BDMs): methods that use an image embedding to find semantically meaningful (i.e., united by a human-understandable concept) subsets of the data on which an image classifier performs significantly worse.
no code implementations • 5 Jun 2022 • Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar
SimEvals train algorithmic agents that take as input the information content (such as model explanations) that would be presented to each participant in a human subject study, and predict answers to the use case of interest.
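The idea of an algorithmic agent standing in for a study participant can be sketched as follows. Here the "explanation" shown to the agent is a toy two-number feature vector, and the agent is a simple nearest-centroid classifier; the actual framework does not prescribe this particular learner, so all of this is illustrative:

```python
import numpy as np

# Sketch: an agent is trained on the information a participant would see
# (a toy 2-number "explanation" per instance) to predict the use-case
# answer. Nearest-centroid is an assumed, deliberately simple learner.
rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, n)                            # use-case answers
expl = labels[:, None] * 2.0 + rng.normal(0, 0.5, (n, 2))  # informative explanations

train, test = slice(0, 150), slice(150, n)
c0 = expl[train][labels[train] == 0].mean(axis=0)
c1 = expl[train][labels[train] == 1].mean(axis=0)

def agent(e):
    # Predict whichever answer's centroid is closer to the explanation.
    return int(np.linalg.norm(e - c1) < np.linalg.norm(e - c0))

acc = np.mean([agent(e) == y for e, y in zip(expl[test], labels[test])])
```

If the agent's accuracy is high, the information content is (in principle) sufficient for the use case; if it is near chance, no participant could do better from that information alone.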
no code implementations • 3 Jun 2021 • Gregory Plumb, Marco Tulio Ribeiro, Ameet Talwalkar
Image classifiers often use spurious patterns, such as "relying on the presence of a person to detect a tennis racket," which do not generalize.
1 code implementation • 13 May 2021 • Joon Sik Kim, Gregory Plumb, Ameet Talwalkar
Saliency methods are a popular class of feature attribution explanation methods that aim to capture a model's predictive reasoning by identifying "important" pixels in an input image.
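The "important pixels" idea behind saliency methods can be shown with a deliberately tiny example. The sketch below uses a toy linear scoring function and finite differences, so the saliency map is exactly the magnitude of each pixel's effect on the score; real saliency methods operate on deep networks, and everything here is an assumed stand-in:

```python
import numpy as np

# Toy linear "classifier": score(x) = sum(w * x). Its saliency map is
# |d score / d pixel|, which for this model is just |w| elementwise.
rng = np.random.default_rng(0)
H, W = 4, 4
w = np.zeros((H, W))
w[1:3, 1:3] = 1.0            # the model only "looks at" the centre pixels
x = rng.random((H, W))

def score(img, weights):
    return float((img * weights).sum())

def saliency(img, weights, eps=1e-4):
    """Finite-difference saliency: bump each pixel and measure the
    change in the model's score."""
    sal = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            bumped = img.copy()
            bumped[i, j] += eps
            sal[i, j] = abs(score(bumped, weights) - score(img, weights)) / eps
    return sal

sal = saliency(x, w)
```

For this linear model the map correctly highlights only the centre pixels; for nonlinear models, gradient-based saliency gives an analogous local sensitivity map.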
no code implementations • 10 Mar 2021 • Valerie Chen, Jeffrey Li, Joon Sik Kim, Gregory Plumb, Ameet Talwalkar
Despite increasing interest in the field of Interpretable Machine Learning (IML), a significant gap persists between the technical objectives targeted by researchers' methods and the high-level goals of consumers' use cases.
no code implementations • ICLR 2021 • Jeffrey Li, Vaishnavh Nagarajan, Gregory Plumb, Ameet Talwalkar
In this paper, we explore connections between interpretable machine learning and learning theory through the lens of local approximation explanations.
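A local approximation explanation fits a simple surrogate model to a black box in a neighborhood of one input. A minimal sketch, assuming a made-up nonlinear black box and a least-squares linear surrogate (the names and sampling scheme are illustrative, not the paper's):

```python
import numpy as np

def black_box(x):
    # Stand-in nonlinear model (assumed for illustration).
    return np.sin(x[..., 0]) + x[..., 1] ** 2

def local_linear_explanation(f, x0, radius=0.05, n=500, seed=0):
    """Sample points near x0, query the black box, and fit a linear
    surrogate by least squares; its coefficients are the explanation."""
    rng = np.random.default_rng(seed)
    xs = x0 + rng.normal(0, radius, size=(n, x0.size))
    ys = f(xs)
    A = np.hstack([xs - x0, np.ones((n, 1))])   # local linear design matrix
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef[:-1]                             # local gradient estimate

x0 = np.array([0.0, 1.0])
g = local_linear_explanation(black_box, x0)
# True gradient of the black box at x0 is (cos(0), 2*1) = (1, 2).
```

Because the surrogate is an estimate of the model's local gradient, this construction is what lets such explanations be analyzed with learning-theoretic tools.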
3 code implementations • ICML 2020 • Gregory Plumb, Jonathan Terhorst, Sriram Sankararaman, Ameet Talwalkar
A common workflow in data exploration is to learn a low-dimensional representation of the data, identify groups of points in that representation, and examine the differences between the groups to determine what they represent.
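The three-step workflow above can be sketched end to end with standard tools: PCA (via SVD) for the low-dimensional representation, a simple split in that representation for the groups, and per-feature mean differences to characterize what separates them. The data and the threshold split are assumptions for illustration, not the paper's method:

```python
import numpy as np

# Step 0: synthetic data where group A differs from group B in feature 0.
rng = np.random.default_rng(0)
A = rng.normal(0, 1, (60, 5)); A[:, 0] += 4    # group A: shifted feature 0
B = rng.normal(0, 1, (60, 5))                  # group B
X = np.vstack([A, B])

# Step 1: learn a low-dimensional representation (1-D PCA via SVD).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
z = Xc @ Vt[0]

# Step 2: identify groups of points in the representation (median split).
groups = z > np.median(z)

# Step 3: examine per-feature differences between the groups.
diff = X[groups].mean(axis=0) - X[~groups].mean(axis=0)
top_feature = int(np.argmax(np.abs(diff)))
```

The feature with the largest absolute mean difference is a first answer to "what do these groups represent"; the paper's contribution concerns making that last step reliable.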
no code implementations • 31 May 2019 • Gregory Plumb, Maruan Al-Shedivat, Eric Xing, Ameet Talwalkar
Most of the work on interpretable machine learning has focused on designing either inherently interpretable models, which typically trade off accuracy for interpretability, or post-hoc explanation systems, which lack guarantees about their explanation quality.
1 code implementation • NeurIPS 2020 • Gregory Plumb, Maruan Al-Shedivat, Angel Alexander Cabrera, Adam Perer, Eric Xing, Ameet Talwalkar
Most of the work on interpretable machine learning has focused on designing either inherently interpretable models, which typically trade off accuracy for interpretability, or post-hoc explanation systems, whose explanation quality can be unpredictable.
2 code implementations • NeurIPS 2018 • Gregory Plumb, Denali Molitor, Ameet Talwalkar
Some of the most common forms of interpretability systems are example-based, local, and global explanations.