no code implementations • 19 Nov 2023 • Nari Johnson, Hoda Heidari
Artificial Intelligence Impact Assessments ("AIIAs"), a family of tools that provide structured processes to imagine the possible impacts of a proposed AI system, have become an increasingly popular proposal for AI governance.
2 code implementations • 13 Jun 2023 • Nari Johnson, Ángel Alexander Cabrera, Gregory Plumb, Ameet Talwalkar
Motivated by these challenges, ML researchers have developed new slice discovery algorithms that aim to group together coherent and high-error subsets of data.
2 code implementations • 8 Jul 2022 • Gregory Plumb, Nari Johnson, Ángel Alexander Cabrera, Ameet Talwalkar
A growing body of work studies Blindspot Discovery Methods ("BDMs"): methods that use an image embedding to find semantically meaningful (i.e., united by a human-understandable concept) subsets of the data where an image classifier performs significantly worse.
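The general recipe behind such methods can be sketched with synthetic data: cluster the image embeddings, then rank the discovered groups by empirical error rate. The embeddings, error labels, and plain k-means below are all illustrative stand-ins, not the specific algorithms studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 2-D "embeddings" for 300 images, with one tight,
# semantically coherent region (a blindspot) where the classifier errs often.
emb = rng.normal(0.0, 1.0, size=(300, 2))
emb[:60] += np.array([6.0, 6.0])           # the coherent blindspot cluster
errors = rng.random(300) < 0.05            # base error rate ~5%
errors[:60] = rng.random(60) < 0.8         # blindspot error rate ~80%

def kmeans(x, k, iters=20, seed=0):
    """Plain k-means; stands in for any clustering of the embedding space."""
    r = np.random.default_rng(seed)
    centers = x[r.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = x[assign == j].mean(axis=0)
    return assign

assign = kmeans(emb, k=5)

# Rank discovered slices by empirical error rate; the top slice is a
# candidate blindspot to surface for human inspection.
rates = [(j, errors[assign == j].mean(), int((assign == j).sum()))
         for j in range(5) if (assign == j).any()]
rates.sort(key=lambda t: -t[1])
top, top_rate, top_size = rates[0]
print(f"top slice: id={top}, error rate={top_rate:.2f}, size={top_size}")
```

Because the worst slice's error rate is at least the dataset-wide average by construction, ranking clusters this way always surfaces an above-average-error group; whether that group is *coherent* is exactly what the embedding choice controls.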
2 code implementations • 22 Jun 2022 • Chirag Agarwal, Dan Ley, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju
OpenXAI comprises the following key components: (i) a flexible synthetic data generator and a collection of diverse real-world datasets, pre-trained models, and state-of-the-art feature attribution methods, and (ii) open-source implementations of eleven quantitative metrics for evaluating the faithfulness, stability (robustness), and fairness of explanation methods, in turn enabling comparisons of explanation methods across a wide variety of metrics, models, and datasets.
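To make the faithfulness family of metrics concrete, here is a minimal sketch of one common faithfulness-style check (not OpenXAI's exact implementation): if an attribution is faithful, masking the top-attributed features should change the prediction more than the attribution's low-ranked features would suggest. The linear model, weights, and helper names are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
w = np.array([2.0, -1.0, 0.1, 0.0])        # hypothetical linear model weights

def predict(x):
    return float(x @ w)

def top_k_mask_drop(x, attr, k):
    """Prediction change after zeroing the k most-attributed features."""
    idx = np.argsort(-np.abs(attr))[:k]
    masked = x.copy()
    masked[idx] = 0.0
    return abs(predict(x) - predict(masked))

x = np.array([1.0, 1.0, 1.0, 1.0])
good_attr = w * x                          # exact attribution for a linear model
rand_attr = rng.normal(size=4)             # an unfaithful "explanation"

good_drop = top_k_mask_drop(x, good_attr, k=2)   # masks the two largest weights
rand_drop = top_k_mask_drop(x, rand_attr, k=2)
print(f"faithful drop={good_drop:.2f}, random drop={rand_drop:.2f}")
```

A benchmark like OpenXAI standardizes checks of this flavor (plus stability and fairness metrics) across many models and datasets so explanation methods can be compared on equal footing.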
no code implementations • 5 Jun 2022 • Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar
SimEvals involve training algorithmic agents that take as input the same information content (such as model explanations) that would be presented to each participant in a human subject study, and learn to predict answers to the use case of interest.
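The core idea can be sketched end-to-end on synthetic data: give an agent exactly the information a study participant would see, train it on the use-case label, and check whether that information content suffices to do better than a baseline. The use case, data-generating process, and logistic-regression agent below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical use case: shown a model's confidence score, can a participant
# tell whether the model's prediction is correct? A SimEval trains an *agent*
# on that same information content instead of running the full human study.
n = 1000
conf = rng.random(n)                       # information shown: model confidence
correct = rng.random(n) < conf             # correctness correlates with confidence
X = np.column_stack([np.ones(n), conf])    # bias term + the shown feature
y = correct.astype(float)

# A logistic-regression agent trained by batch gradient descent.
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

agent_acc = (((X @ w) > 0) == correct).mean()
baseline = max(correct.mean(), 1 - correct.mean())
print(f"agent accuracy={agent_acc:.2f} vs majority baseline={baseline:.2f}")
```

If the agent cannot beat the baseline, the information content itself is likely insufficient for the use case, which is a cheap signal to gather before recruiting human subjects.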
no code implementations • 14 Mar 2022 • Chirag Agarwal, Nari Johnson, Martin Pawelczyk, Satyapriya Krishna, Eshika Saxena, Marinka Zitnik, Himabindu Lakkaraju
As attribution-based explanation methods are increasingly used to establish model trustworthiness in high-stakes situations, it is critical to ensure that these explanations are stable, e.g., robust to infinitesimal perturbations to an input.
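One simple notion of (in)stability can be sketched directly: compute an attribution at an input, recompute it at nearby perturbed inputs, and measure the worst-case relative change of the explanation. The toy model, finite-difference attribution, and perturbation radius below are illustrative assumptions, not the metrics defined in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-feature model.
def model(x):
    return np.tanh(3.0 * x[0]) + 0.5 * x[1]

def attribution(x, eps=1e-4):
    """Finite-difference gradient, as a stand-in for any attribution method."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (model(x + d) - model(x - d)) / (2 * eps)
    return g

def instability(x, radius=1e-2, trials=100):
    """Worst-case relative change of the explanation over small random
    perturbations of the input -- one simple (in)stability measure."""
    base = attribution(x)
    worst = 0.0
    for _ in range(trials):
        delta = rng.uniform(-radius, radius, size=x.shape)
        pert = attribution(x + delta)
        worst = max(worst, np.linalg.norm(pert - base) / np.linalg.norm(base))
    return worst

x = np.array([0.0, 1.0])
print(f"instability near x: {instability(x):.4f}")
```

A stable explanation method keeps this quantity small for perturbations a human would consider meaningless; large values mean visually or numerically different explanations for effectively identical inputs.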
no code implementations • 22 Sep 2021 • Nari Johnson, Sonali Parbhoo, Andrew Slavin Ross, Finale Doshi-Velez
Machine learning models that utilize patient data across time (rather than just the most recent measurements) have increased performance for many risk stratification tasks in the intensive care unit.