Search Results for author: Nari Johnson

Found 7 papers, 3 papers with code

Assessing AI Impact Assessments: A Classroom Study

no code implementations · 19 Nov 2023 · Nari Johnson, Hoda Heidari

Artificial Intelligence Impact Assessments ("AIIAs"), a family of tools that provide structured processes to imagine the possible impacts of a proposed AI system, have become an increasingly popular proposal to govern AI systems.

Where Does My Model Underperform? A Human Evaluation of Slice Discovery Algorithms

2 code implementations · 13 Jun 2023 · Nari Johnson, Ángel Alexander Cabrera, Gregory Plumb, Ameet Talwalkar

Motivated by these challenges, ML researchers have developed new slice discovery algorithms that aim to group together coherent and high-error subsets of data.
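The core idea of slice discovery — grouping a model's errors into coherent subsets — can be sketched generically (this is an illustrative toy, not any specific algorithm from the paper): cluster examples in an embedding space, then rank clusters by error rate so the worst-performing slices surface first.

```python
import numpy as np

def discover_slices(embeddings, errors, k=2, iters=20):
    """Toy slice discovery: k-means over the embedding space, then rank
    clusters by mean error rate so the worst-performing slices come first.
    (Illustrative only; not any specific published algorithm.)"""
    # Simple deterministic init: k points spread across the dataset order.
    centroids = embeddings[np.linspace(0, len(embeddings) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids, keeping the old one if a cluster empties.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = embeddings[labels == j].mean(axis=0)
    slices = [(j, float(errors[labels == j].mean())) for j in range(k) if (labels == j).any()]
    return sorted(slices, key=lambda s: -s[1])

# Synthetic data: two well-separated clusters; the second is always misclassified.
rng = np.random.default_rng(1)
emb = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])
err = np.concatenate([np.zeros(50), np.ones(50)])
ranked = discover_slices(emb, err, k=2)
print(ranked)  # highest-error slice first
```

On this synthetic data the cluster whose examples are always wrong is ranked first with error rate 1.0; real slice discovery methods differ mainly in how they build the embedding and enforce that slices are semantically coherent.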

Object Detection

Towards a More Rigorous Science of Blindspot Discovery in Image Classification Models

2 code implementations · 8 Jul 2022 · Gregory Plumb, Nari Johnson, Ángel Alexander Cabrera, Ameet Talwalkar

A growing body of work studies Blindspot Discovery Methods (BDMs): methods that use an image embedding to find semantically meaningful (i.e., united by a human-understandable concept) subsets of the data where an image classifier performs significantly worse.

Dimensionality Reduction · Image Classification

OpenXAI: Towards a Transparent Evaluation of Model Explanations

2 code implementations · 22 Jun 2022 · Chirag Agarwal, Dan Ley, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju

OpenXAI comprises the following key components: (i) a flexible synthetic data generator and a collection of diverse real-world datasets, pre-trained models, and state-of-the-art feature attribution methods, and (ii) open-source implementations of eleven quantitative metrics for evaluating the faithfulness, stability (robustness), and fairness of explanation methods. Together these provide comparisons of several explanation methods across a wide variety of metrics, models, and datasets.
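One of the metric families named here, faithfulness, can be illustrated in a generic form (this is not OpenXAI's actual API or metric implementation): check whether zeroing the features an explanation ranks highest degrades the model's output more than zeroing the same number of random features.

```python
import numpy as np

def faithfulness_gap(predict, x, attribution, n_mask=2, trials=100, seed=0):
    """Generic faithfulness check (illustrative, not OpenXAI's implementation):
    drop in model output when the top-attributed features are zeroed, minus the
    average drop when the same number of random features are zeroed. Positive
    values mean the explanation found genuinely influential features."""
    rng = np.random.default_rng(seed)
    base = predict(x)
    top = np.argsort(-np.abs(attribution))[:n_mask]
    x_top = x.copy()
    x_top[top] = 0.0
    top_drop = base - predict(x_top)
    rand_drops = []
    for _ in range(trials):
        idx = rng.choice(len(x), n_mask, replace=False)
        x_rand = x.copy()
        x_rand[idx] = 0.0
        rand_drops.append(base - predict(x_rand))
    return top_drop - float(np.mean(rand_drops))

# Toy linear model where only the first two features matter.
w = np.array([5.0, 4.0, 0.0, 0.0, 0.0])
predict = lambda x: float(w @ x)
x = np.ones(5)
attr = w * x  # exact attribution for a linear model
gap = faithfulness_gap(predict, x, attr)
print(gap > 0)  # a faithful explanation yields a positive gap
```

Because the attribution here is exact (linear model), masking its top features removes all of the model's signal, so the gap is positive; an unfaithful explanation would score near zero.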

Benchmarking · Explainable Artificial Intelligence (XAI) · +1

Use-Case-Grounded Simulations for Explanation Evaluation

no code implementations · 5 Jun 2022 · Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar

SimEvals involve training algorithmic agents that take as input the information content (such as model explanations) that would be presented to each participant in a human subject study, and predict answers to the use case of interest.
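The idea of an algorithmic agent standing in for a study participant can be sketched as a small supervised problem (a hypothetical setup, not the paper's actual agents or use cases): the agent observes the information content shown to a participant and is trained to answer the use-case question, here "will the model err on this input?".

```python
import numpy as np

# Hypothetical setup: the "information content" shown to a participant is a
# one-number explanation summary; the use-case question is whether the model
# errs on the input. The agent is a trivial one-feature classifier.
rng = np.random.default_rng(0)
n = 200
errs = rng.integers(0, 2, n)               # ground-truth use-case answers
expl = errs * 2.0 + rng.normal(0, 0.3, n)  # explanation content carries the signal

train, test = np.arange(0, 150), np.arange(150, n)

# "Train" the agent: threshold halfway between the class means on the train split.
mu0 = expl[train][errs[train] == 0].mean()
mu1 = expl[train][errs[train] == 1].mean()
thresh = (mu0 + mu1) / 2

pred = (expl[test] > thresh).astype(int)
accuracy = float((pred == errs[test]).mean())
print(accuracy)
```

If the agent's held-out accuracy is high, the information content plausibly supports the use case; if it is near chance, running an expensive human subject study with that content may not be worthwhile.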

Counterfactual Reasoning

Rethinking Stability for Attribution-based Explanations

no code implementations · 14 Mar 2022 · Chirag Agarwal, Nari Johnson, Martin Pawelczyk, Satyapriya Krishna, Eshika Saxena, Marinka Zitnik, Himabindu Lakkaraju

As attribution-based explanation methods are increasingly used to establish model trustworthiness in high-stakes situations, it is critical to ensure that these explanations are stable, e.g., robust to infinitesimal perturbations of an input.
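Stability in this sense can be illustrated with gradient (saliency) attributions for a simple differentiable model (a minimal sketch under assumed definitions, not the paper's stability metrics): compare the attribution for an input against attributions for slightly perturbed copies.

```python
import numpy as np

def grad_attribution(w, x):
    """Gradient (saliency) attribution for a logistic model p = sigmoid(w.x):
    dp/dx = p * (1 - p) * w."""
    p = 1.0 / (1.0 + np.exp(-(w @ x)))
    return p * (1 - p) * w

def stability(w, x, eps=1e-3, trials=50, seed=0):
    """Worst-case relative change in the attribution under small random
    input perturbations (smaller = more stable). Illustrative only."""
    rng = np.random.default_rng(seed)
    a = grad_attribution(w, x)
    worst = 0.0
    for _ in range(trials):
        delta = rng.normal(0, eps, size=x.shape)
        a_pert = grad_attribution(w, x + delta)
        worst = max(worst, np.linalg.norm(a_pert - a) / np.linalg.norm(a))
    return worst

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, -0.4])
s = stability(w, x)
print(s)  # small for this smooth model
```

For a smooth model like this the attribution barely moves under tiny perturbations; instability shows up when the attribution changes sharply even though the input (and typically the prediction) is essentially unchanged.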

Learning Predictive and Interpretable Timeseries Summaries from ICU Data

no code implementations · 22 Sep 2021 · Nari Johnson, Sonali Parbhoo, Andrew Slavin Ross, Finale Doshi-Velez

Machine learning models that utilize patient data across time (rather than just the most recent measurements) have increased performance for many risk stratification tasks in the intensive care unit.

Time Series · Time Series Analysis
