Search Results for author: Gregory Plumb

Found 11 papers, 6 papers with code

Where Does My Model Underperform? A Human Evaluation of Slice Discovery Algorithms

2 code implementations • 13 Jun 2023 • Nari Johnson, Ángel Alexander Cabrera, Gregory Plumb, Ameet Talwalkar

Motivated by these challenges, ML researchers have developed new slice discovery algorithms that aim to group together coherent and high-error subsets of data.

Object Detection
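
As a concrete illustration of the slice-discovery recipe described above, the sketch below clusters points in an embedding space and ranks clusters by error rate. The embedding input, cluster count, and size threshold are illustrative assumptions, not the specific algorithms evaluated in the paper.

```python
# Minimal slice-discovery sketch: cluster examples in an embedding space,
# then surface the clusters ("slices") with the highest error rates.
import numpy as np
from sklearn.cluster import KMeans

def discover_slices(embeddings, correct, n_clusters=20, min_size=25):
    """embeddings: (N, d) array; correct: (N,) boolean array of per-example correctness."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embeddings)
    slices = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        if len(idx) < min_size:
            continue
        error_rate = 1.0 - correct[idx].mean()
        slices.append((error_rate, idx))
    # Highest-error clusters are the candidate underperforming slices.
    return sorted(slices, key=lambda s: s[0], reverse=True)
```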

Towards a More Rigorous Science of Blindspot Discovery in Image Classification Models

2 code implementations • 8 Jul 2022 • Gregory Plumb, Nari Johnson, Ángel Alexander Cabrera, Ameet Talwalkar

A growing body of work studies Blindspot Discovery Methods (BDMs): methods that use an image embedding to find semantically meaningful (i.e., united by a human-understandable concept) subsets of the data where an image classifier performs significantly worse.

Dimensionality Reduction • Image Classification

Use-Case-Grounded Simulations for Explanation Evaluation

no code implementations • 5 Jun 2022 • Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar

SimEvals involve training algorithmic agents that take as input the information content (such as model explanations) that would be presented to each participant in a human subject study, to predict answers to the use case of interest.

Counterfactual Reasoning
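
A minimal sketch of the SimEval idea as stated above: train an algorithmic agent on the information content a study participant would see and check how well it predicts the use-case answer. The logistic-regression agent and feature layout are assumptions for illustration, not the paper's exact setup.

```python
# SimEval-style sketch: an algorithmic agent receives the same information
# content (e.g., predictions plus explanation features) that a human
# participant would see, and is trained to answer the use-case question.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def run_simeval(information_content, use_case_answers):
    """information_content: (N, d) features shown to participants;
    use_case_answers: (N,) ground-truth answers for the use case."""
    X_train, X_test, y_train, y_test = train_test_split(
        information_content, use_case_answers, test_size=0.3, random_state=0)
    agent = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    # A high agent score suggests the information content is, in principle,
    # predictive enough to support the use case.
    return agent.score(X_test, y_test)
```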

Finding and Fixing Spurious Patterns with Explanations

no code implementations • 3 Jun 2021 • Gregory Plumb, Marco Tulio Ribeiro, Ameet Talwalkar

Image classifiers often use spurious patterns, such as "relying on the presence of a person to detect a tennis racket," which do not generalize.

Data Augmentation
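
The tennis-racket example above suggests a simple diagnostic: mask out the suspected co-occurring object and check whether the prediction changes. The model interface and masking strategy below are illustrative assumptions, and this is not the paper's full method for finding or fixing spurious patterns.

```python
# Crude spurious-pattern check: if zeroing out the region containing the
# co-occurring object (e.g., the person) substantially lowers the target
# class score, the classifier may be relying on a spurious pattern.
import torch

def spurious_region_effect(model, image, region_mask, target_class):
    """image: (3, H, W) tensor; region_mask: (H, W) boolean tensor marking the
    suspected spurious object."""
    model.eval()
    with torch.no_grad():
        original = model(image.unsqueeze(0)).softmax(-1)[0, target_class]
        masked_image = image.clone()
        masked_image[:, region_mask] = 0.0  # remove the suspected object
        masked = model(masked_image.unsqueeze(0)).softmax(-1)[0, target_class]
    # A large drop suggests the prediction depended on the masked region.
    return (original - masked).item()
```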

Sanity Simulations for Saliency Methods

1 code implementation • 13 May 2021 • Joon Sik Kim, Gregory Plumb, Ameet Talwalkar

Saliency methods are a popular class of feature attribution explanation methods that aim to capture a model's predictive reasoning by identifying "important" pixels in an input image.

Benchmarking
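
For reference, the sketch below computes a plain gradient saliency map, the simplest instance of the pixel-level attribution described above; real saliency methods differ in how they accumulate or post-process gradients.

```python
# Plain gradient saliency: importance of each pixel is the magnitude of the
# gradient of the target class score with respect to the input image.
import torch

def gradient_saliency(model, image, target_class):
    """image: (3, H, W) tensor; returns an (H, W) map of absolute input gradients."""
    model.eval()
    x = image.unsqueeze(0).clone().requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    # Take the max absolute gradient across colour channels per pixel.
    return x.grad[0].abs().max(dim=0).values
```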

Interpretable Machine Learning: Moving From Mythos to Diagnostics

no code implementations • 10 Mar 2021 • Valerie Chen, Jeffrey Li, Joon Sik Kim, Gregory Plumb, Ameet Talwalkar

Despite increasing interest in the field of Interpretable Machine Learning (IML), a significant gap persists between the technical objectives targeted by researchers' methods and the high-level goals of consumers' use cases.

BIG-bench Machine Learning • Interpretable Machine Learning

A Learning Theoretic Perspective on Local Explainability

no code implementations • ICLR 2021 • Jeffrey Li, Vaishnavh Nagarajan, Gregory Plumb, Ameet Talwalkar

In this paper, we explore connections between interpretable machine learning and learning theory through the lens of local approximation explanations.

BIG-bench Machine Learning • Interpretable Machine Learning +1
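
A minimal sketch of a local approximation explanation of the kind referenced above: fit a linear surrogate to the black box in a small neighbourhood of a point and measure its fidelity there. The Gaussian neighbourhood and ridge surrogate are illustrative choices, not the paper's formal framework.

```python
# Local approximation explanation: sample points around x, fit a linear
# surrogate to the black box's outputs, and report coefficients plus
# local fidelity (how well the surrogate matches the black box nearby).
import numpy as np
from sklearn.linear_model import Ridge

def local_linear_explanation(black_box, x, sigma=0.1, n_samples=500):
    """black_box: callable mapping (n, d) -> (n,) predictions; x: (d,) point."""
    rng = np.random.default_rng(0)
    neighbours = x + sigma * rng.standard_normal((n_samples, x.shape[0]))
    targets = black_box(neighbours)
    explainer = Ridge(alpha=1.0).fit(neighbours, targets)
    fidelity = explainer.score(neighbours, targets)
    return explainer.coef_, fidelity
```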

Explaining Groups of Points in Low-Dimensional Representations

3 code implementations • ICML 2020 • Gregory Plumb, Jonathan Terhorst, Sriram Sankararaman, Ameet Talwalkar

A common workflow in data exploration is to learn a low-dimensional representation of the data, identify groups of points in that representation, and examine the differences between the groups to determine what they represent.

Counterfactual Explanation +1
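
The workflow described above can be sketched as follows, with PCA, k-means, and a difference-of-means comparison as illustrative stand-ins; the task tags suggest the paper itself produces counterfactual explanations rather than this simple comparison.

```python
# Data-exploration workflow sketch: learn a low-dimensional representation,
# group points in it, then compare groups in the original feature space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def explain_groups(X, n_components=2, n_clusters=5):
    """X: (N, d) data matrix in the original feature space."""
    low_dim = PCA(n_components=n_components).fit_transform(X)
    groups = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(low_dim)
    overall_mean = X.mean(axis=0)
    # For each group, the features whose means deviate most from the overall mean.
    differences = {g: X[groups == g].mean(axis=0) - overall_mean for g in range(n_clusters)}
    return low_dim, groups, differences
```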

Regularizing Black-box Models for Improved Interpretability (HILL 2019 Version)

no code implementations • 31 May 2019 • Gregory Plumb, Maruan Al-Shedivat, Eric Xing, Ameet Talwalkar

Most of the work on interpretable machine learning has focused on designing either inherently interpretable models, which typically trade-off accuracy for interpretability, or post-hoc explanation systems, which lack guarantees about their explanation quality.

BIG-bench Machine Learning • Interpretable Machine Learning

Regularizing Black-box Models for Improved Interpretability

1 code implementation • NeurIPS 2020 • Gregory Plumb, Maruan Al-Shedivat, Angel Alexander Cabrera, Adam Perer, Eric Xing, Ameet Talwalkar

Most of the work on interpretable machine learning has focused on designing either inherently interpretable models, which typically trade-off accuracy for interpretability, or post-hoc explanation systems, whose explanation quality can be unpredictable.

BIG-bench Machine Learning • Interpretable Machine Learning
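
One way to make post-hoc explanation quality less unpredictable is to regularize the model toward locally linear behaviour during training; the sketch below shows such a penalty. The perturbation scheme and penalty are assumptions for illustration, not necessarily the paper's exact regularizer.

```python
# Local-linearity penalty sketch: penalize the deviation of f(x + eps) from
# the first-order expansion f(x) + grad_f(x) . eps on random perturbations.
# Add the (scaled) penalty to the usual task loss during training.
import torch

def local_linearity_penalty(model, x, sigma=0.05, n_samples=8):
    """x: (B, d) batch; assumes model(x) returns a (B,) scalar-output tensor."""
    x = x.detach().clone().requires_grad_(True)
    y = model(x)                                                    # (B,)
    grads = torch.autograd.grad(y.sum(), x, create_graph=True)[0]   # (B, d)
    penalty = 0.0
    for _ in range(n_samples):
        eps = sigma * torch.randn_like(x)
        linear_pred = y + (grads * eps).sum(dim=-1)
        penalty = penalty + ((model(x + eps) - linear_pred) ** 2).mean()
    return penalty / n_samples
```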

Model Agnostic Supervised Local Explanations

2 code implementations • NeurIPS 2018 • Gregory Plumb, Denali Molitor, Ameet Talwalkar

Some of the most common forms of interpretability systems are example-based, local, and global explanations.

Feature Selection
