Search Results for author: Su-In Lee

Found 38 papers, 21 papers with code

Efficient Shapley Values for Attributing Global Properties of Diffusion Models to Data Groups

no code implementations · 9 Jun 2024 · Chris Lin, Mingyu Lu, Chanwoo Kim, Su-In Lee

As diffusion models are deployed in real-world settings, data attribution is needed to ensure fair acknowledgment for contributors of high-quality training data and to identify sources of harmful content.

Diversity

Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution

3 code implementations · 29 Jan 2024 · Ian Covert, Chanwoo Kim, Su-In Lee, James Zou, Tatsunori Hashimoto

Many tasks in explainable machine learning, such as data valuation and feature attribution, perform expensive computation for each data point and are intractable for large datasets.

Data Valuation
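
The trick here, as I read it: instead of running an expensive attribution or valuation estimator to convergence on every data point, fit a network to cheap noisy-but-unbiased estimates, amortizing the cost across the dataset. A minimal sketch in PyTorch (the architecture and `amortize` helper are mine, not the paper's code):

```python
import torch
import torch.nn as nn

def amortize(x, noisy_labels, epochs=200):
    """Fit g(x) ~ E[noisy_labels | x] by least squares.

    x:            (n, d) inputs
    noisy_labels: (n, k) cheap unbiased estimates of the expensive target,
                  e.g. a few Monte Carlo samples of a Shapley value
    """
    d, k = x.shape[1], noisy_labels.shape[1]
    g = nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, k))
    opt = torch.optim.Adam(g.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        # squared error against noisy targets: the minimizer is the
        # conditional mean, so unbiased label noise averages out
        loss = ((g(x) - noisy_labels) ** 2).mean()
        loss.backward()
        opt.step()
    return g  # g(x_new) then predicts attributions in one forward pass
```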

Estimating Conditional Mutual Information for Dynamic Feature Selection

1 code implementation · 5 Jun 2023 · Soham Gadgil, Ian Covert, Su-In Lee

Dynamic feature selection, where we sequentially query features to make accurate predictions with a minimal budget, is a promising paradigm to reduce feature acquisition costs and provide transparency into a model's predictions.

feature selection
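
A rough sketch of the greedy acquisition loop this sets up, under stated assumptions: `cmi_estimator` and `predictor` are hypothetical callables standing in for the networks the paper trains, and `budget` is a hard cap on queries.

```python
def select_features(x, budget, n_features, cmi_estimator, predictor):
    """Sequentially query features under a hard budget.

    cmi_estimator(x, observed, j) -> estimate of I(y; x_j | x_observed)
    predictor(x, observed)        -> prediction from the observed subset
    (both are assumed helpers, not the paper's API)
    """
    observed = set()
    for _ in range(budget):
        candidates = [j for j in range(n_features) if j not in observed]
        # greedy step: acquire the most informative feature given what we know
        best = max(candidates, key=lambda j: cmi_estimator(x, observed, j))
        observed.add(best)  # pay to reveal x[best]
    return predictor(x, observed), observed
```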

Learning to Maximize Mutual Information for Dynamic Feature Selection

1 code implementation · 2 Jan 2023 · Ian Covert, Wei Qiu, Mingyu Lu, Nayoon Kim, Nathan White, Su-In Lee

Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets.

feature selection · Reinforcement Learning (RL)

Contrastive Corpus Attribution for Explaining Representations

1 code implementation · 30 Sep 2022 · Chris Lin, Hugh Chen, Chanwoo Kim, Su-In Lee

To address this, we propose contrastive corpus similarity, a novel and semantically meaningful scalar explanation output based on a reference corpus and a contrasting foil set of samples.

Contrastive Learning · Object Localization
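
As I read it, the proposed scalar compares a sample's embedding similarity to the corpus against its similarity to the foil set; a minimal sketch, assuming cosine similarity and precomputed embeddings:

```python
import numpy as np

def contrastive_corpus_similarity(z, corpus, foil):
    """z: (d,) embedding of the sample to explain;
    corpus: (m, d) embeddings of the reference corpus;
    foil: (k, d) embeddings of the contrasting foil set."""
    def avg_cos(batch, v):
        batch = batch / np.linalg.norm(batch, axis=1, keepdims=True)
        return (batch @ (v / np.linalg.norm(v))).mean()
    # high when similar to the corpus and dissimilar to the foil
    return avg_cos(corpus, z) - avg_cos(foil, z)
```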

Algorithms to estimate Shapley value feature attributions

1 code implementation · 15 Jul 2022 · Hugh Chen, Ian C. Covert, Scott M. Lundberg, Su-In Lee

Based on the various feature removal approaches, we describe the multiple types of Shapley value feature attributions and methods to calculate each one.
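
For concreteness, here is the classic permutation-sampling estimator that many of the surveyed stochastic algorithms build on. The `value_fn` slot is where the paper's central choice, the feature-removal approach, plugs in:

```python
import numpy as np

def shapley_sampling(value_fn, n_features, n_samples=1000, rng=None):
    """Permutation-sampling Shapley estimate.

    value_fn(S) -> payoff of feature subset S (a Python set); in attribution
    it wraps the model plus a removal strategy (baseline, marginalizing, ...).
    """
    rng = np.random.default_rng() if rng is None else rng
    phi = np.zeros(n_features)
    for _ in range(n_samples):
        perm = rng.permutation(n_features)
        S, prev = set(), value_fn(set())
        for j in perm:
            S.add(j)
            cur = value_fn(S)
            phi[j] += cur - prev  # marginal contribution of j in this ordering
            prev = cur
    return phi / n_samples
```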

Learning to Estimate Shapley Values with Vision Transformers

2 code implementations · 10 Jun 2022 · Ian Covert, Chanwoo Kim, Su-In Lee

Transformers have become a default architecture in computer vision, but understanding what drives their predictions remains a challenging problem.

A Deep Bayesian Bandits Approach for Anticancer Therapy: Exploration via Functional Prior

no code implementations · 5 May 2022 · Mingyu Lu, Yifang Chen, Su-In Lee

Learning personalized cancer treatment with machine learning holds great promise to improve cancer patients' chance of survival.

BIG-bench Machine Learning · Drug Response Prediction

Moment Matching Deep Contrastive Latent Variable Models

1 code implementation · 21 Feb 2022 · Ethan Weinberger, Nicasia Beebe-Wang, Su-In Lee

In the contrastive analysis (CA) setting, machine learning practitioners are specifically interested in discovering patterns that are enriched in a target dataset as compared to a background dataset generated from sources of variation irrelevant to the task at hand.

FastSHAP: Real-Time Shapley Value Estimation

5 code implementations · ICLR 2022 · Neil Jethani, Mukund Sudarshan, Ian Covert, Su-In Lee, Rajesh Ranganath

Shapley values are widely used to explain black-box models, but they are costly to calculate because they require many model evaluations.
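
A rough sketch of the amortization idea (my simplified rendering, not the paper's exact objective): train a network mapping inputs straight to Shapley value estimates by minimizing the weighted least-squares objective that characterizes them over sampled coalitions.

```python
import torch

def fastshap_style_loss(phi_net, x, value_fn, sample_coalition):
    """phi_net(x) -> (d,) candidate Shapley values for input x.

    value_fn(x, S)      -> payoff of 0/1 coalition mask S on input x
    sample_coalition(d) -> mask drawn with the Shapley kernel weighting
    (both are hypothetical helpers; the weighting choice matters)
    """
    phi = phi_net(x)
    S = sample_coalition(phi.shape[-1])
    # squared residual of the additive game approximation on this coalition
    resid = value_fn(x, S) - value_fn(x, torch.zeros_like(S)) - (S * phi).sum()
    return resid ** 2  # averaged over many x and S in practice
```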

Pitfalls of Explainable ML: An Industry Perspective

no code implementations · 14 Jun 2021 · Sahil Verma, Aditya Lahiri, John P. Dickerson, Su-In Lee

The goal of explainable ML is to intuitively explain the predictions of an ML system, while adhering to the needs of various stakeholders.

Explainable Artificial Intelligence (XAI)

Explaining a Series of Models by Propagating Shapley Values

no code implementations · 30 Apr 2021 · Hugh Chen, Scott M. Lundberg, Su-In Lee

Local feature attribution methods are increasingly used to explain complex machine learning models.

Mortality Prediction

Improving KernelSHAP: Practical Shapley Value Estimation via Linear Regression

4 code implementations · 2 Dec 2020 · Ian Covert, Su-In Lee

The Shapley value concept from cooperative game theory has become a popular technique for interpreting ML models, but efficiently estimating these values remains challenging, particularly in the model-agnostic setting.

regression
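
The regression view in question: Shapley values solve a weighted least-squares problem over coalitions, weighted by the Shapley kernel. A small exact-enumeration sketch (KernelSHAP samples coalitions instead; the efficiency constraint is handled here via a Lagrange multiplier):

```python
import numpy as np
from itertools import combinations
from math import comb

def shapley_via_regression(value_fn, d):
    """Exact Shapley values of a small game (d >= 2) by weighted regression."""
    rows, targets, weights = [], [], []
    for size in range(1, d):  # proper, non-empty coalitions only
        for S in combinations(range(d), size):
            z = np.zeros(d)
            z[list(S)] = 1.0
            rows.append(z)
            targets.append(value_fn(set(S)) - value_fn(set()))
            # Shapley kernel weight for a coalition of this size
            weights.append((d - 1) / (comb(d, size) * size * (d - size)))
    Z, y, w = np.array(rows), np.array(targets), np.array(weights)
    A, b = Z.T @ (w[:, None] * Z), Z.T @ (w * y)
    # enforce efficiency sum(phi) = v(N) - v({}) with a Lagrange multiplier
    total = value_fn(set(range(d))) - value_fn(set())
    Ainv_b, Ainv_1 = np.linalg.solve(A, b), np.linalg.solve(A, np.ones(d))
    lam = (Ainv_b.sum() - total) / Ainv_1.sum()
    return Ainv_b - lam * Ainv_1
```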

Explaining by Removing: A Unified Framework for Model Explanation

3 code implementations · 21 Nov 2020 · Ian Covert, Scott Lundberg, Su-In Lee

We describe a new unified class of methods, removal-based explanations, that are based on the principle of simulating feature removal to quantify each feature's influence.

counterfactual · Counterfactual Reasoning
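
A minimal illustration of the principle, under assumptions: fix one removal strategy (here, replacing removed features with baseline values, one of several options the framework covers) and one summary (here, occlusion-style single-feature removal):

```python
import numpy as np

def eval_subset(model, x, kept, baseline):
    """Evaluate the model with only `kept` features, the rest 'removed'
    by baseline replacement; other removal strategies plug into this slot."""
    x_masked = baseline.copy()
    x_masked[kept] = x[kept]
    return model(x_masked)

def occlusion_attribution(model, x, baseline):
    """Influence of each feature = drop in output when it alone is removed."""
    d = x.shape[0]
    full = eval_subset(model, x, np.arange(d), baseline)
    return np.array([
        full - eval_subset(model, x, np.delete(np.arange(d), j), baseline)
        for j in range(d)
    ])
```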

Feature Removal Is a Unifying Principle for Model Explanation Methods

1 code implementation · 6 Nov 2020 · Ian Covert, Scott Lundberg, Su-In Lee

Researchers have proposed a wide variety of model explanation approaches, but it remains unclear how most methods are related or when one method is preferable to another.

True to the Model or True to the Data?

no code implementations · 29 Jun 2020 · Hugh Chen, Joseph D. Janizek, Scott Lundberg, Su-In Lee

Furthermore, we argue that the choice comes down to whether it is desirable to be true to the model or true to the data.

BIG-bench Machine Learning
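
In notation of my choosing, the two options being contrasted: with features S observed at values x_S and \bar{S} the remaining features, "true to the model" marginalizes the removed features, while "true to the data" conditions on the observed ones.

```latex
v_{\text{model}}(S) = \mathbb{E}_{X_{\bar S}}\bigl[ f(x_S, X_{\bar S}) \bigr]
\qquad
v_{\text{data}}(S) = \mathbb{E}\bigl[ f(X) \mid X_S = x_S \bigr]
```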

Understanding Global Feature Contributions With Additive Importance Measures

3 code implementations · NeurIPS 2020 · Ian Covert, Scott Lundberg, Su-In Lee

Understanding the inner workings of complex machine learning models is a long-standing problem and most recent research has focused on local interpretability.

Feature Importance

Forecasting adverse surgical events using self-supervised transfer learning for physiological signals

no code implementations · 12 Feb 2020 · Hugh Chen, Scott Lundberg, Gabe Erion, Jerry H. Kim, Su-In Lee

Here, we present a transferable embedding method (i.e., a method to transform time series signals into input features for predictive machine learning models) named PHASE (PHysiologicAl Signal Embeddings) that enables us to more accurately forecast adverse surgical outcomes based on physiological signals.

Time Series · Time Series Analysis +1

Explaining Explanations: Axiomatic Feature Interactions for Deep Networks

2 code implementations · 10 Feb 2020 · Joseph D. Janizek, Pascal Sturmfels, Su-In Lee

Integrated Hessians overcomes several theoretical limitations of previous methods to explain interactions, and unlike such previous methods is not limited to a specific architecture or class of neural network.
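
The construction applies Integrated Gradients to itself; for a zero baseline and i ≠ j, the interaction term works out to roughly the following form (my transcription, worth checking against the paper):

```latex
\Gamma_{i,j}(x) = x_i \, x_j \int_0^1 \!\! \int_0^1 \alpha\beta \,
  \frac{\partial^2 f}{\partial x_i \, \partial x_j}\bigl(\alpha\beta\, x\bigr)
  \, d\alpha \, d\beta
```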

An Adversarial Approach for the Robust Classification of Pneumonia from Chest Radiographs

1 code implementation · 13 Jan 2020 · Joseph D. Janizek, Gabriel Erion, Alex J. DeGrave, Su-In Lee

In order for these models to be safely deployed, we would like to ensure that they do not use confounding variables to make their classification, and that they will work well even when tested on images from hospitals that were not included in the training data.

General Classification · Robust classification

Learning Deep Attribution Priors Based On Prior Knowledge

no code implementations · NeurIPS 2020 · Ethan Weinberger, Joseph Janizek, Su-In Lee

In real-world problems we often have sets of additional information for each feature that are predictive of that feature's importance to the task at hand.

Feature Importance

Explaining Models by Propagating Shapley Values of Local Components

no code implementations · 27 Nov 2019 · Hugh Chen, Scott Lundberg, Su-In Lee

In healthcare, making the best possible predictions with complex models (e.g., neural networks, ensembles/stacks of different models) can impact patient welfare.

Deep unsupervised feature selection

no code implementations · 25 Sep 2019 · Ian Covert, Uygar Sumbul, Su-In Lee

Unsupervised feature selection involves finding a small number of highly informative features, in the absence of a specific supervised learning task.

feature selection

Improving performance of deep learning models with axiomatic attribution priors and expected gradients

3 code implementations · ICLR 2020 · Gabriel Erion, Joseph D. Janizek, Pascal Sturmfels, Scott Lundberg, Su-In Lee

Recent research has demonstrated that feature attribution methods for deep networks can themselves be incorporated into training; these attribution priors optimize for a model whose attributions have certain desirable properties -- most frequently, that particular features are important or unimportant.

Interpretable Machine Learning
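
A minimal sketch of the training setup (my simplification): add a penalty on the model's attributions to the task loss, with attributions computed by a one-sample expected-gradients estimate so each step costs only one extra backward pass. The sparsity penalty is just an example prior, not the paper's only choice.

```python
import torch
import torch.nn.functional as F

def expected_gradients_sample(model, x, baseline):
    """One-sample expected gradients: integrated gradients with a random
    interpolation point and a baseline drawn from the data."""
    alpha = torch.rand(x.shape[0], 1, device=x.device)
    point = (baseline + alpha * (x - baseline)).requires_grad_(True)
    (grad,) = torch.autograd.grad(model(point).sum(), point, create_graph=True)
    return (x - baseline) * grad

def loss_with_attribution_prior(model, x, y, baseline, lam=0.1):
    attr = expected_gradients_sample(model, x, baseline)
    omega = attr.abs().mean()  # example prior: prefer sparse attributions
    return F.cross_entropy(model(x), y) + lam * omega
```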

Physiological Signal Embeddings (PHASE) via Interpretable Stacked Models

no code implementations · ICLR 2019 · Hugh Chen, Scott Lundberg, Gabe Erion, Su-In Lee

Here, we present the PHASE (PHysiologicAl Signal Embeddings) framework, which consists of three components: i) learning neural network embeddings of physiological signals, ii) predicting outcomes based on the learned embedding, and iii) interpreting the prediction results by estimating feature attributions in the "stacked" models (i.e., feature embedding model followed by prediction model).

Network Embedding

Hybrid Gradient Boosting Trees and Neural Networks for Forecasting Operating Room Data

no code implementations · 23 Jan 2018 · Hugh Chen, Scott Lundberg, Su-In Lee

In this paper, we present feature learning via long short-term memory (LSTM) networks and prediction via gradient boosting trees (XGB).

Representation Learning · Time Series +1

Anesthesiologist-level forecasting of hypoxemia with only SpO2 data using deep learning

no code implementations · 2 Dec 2017 · Gabriel Erion, Hugh Chen, Scott M. Lundberg, Su-In Lee

We also provide a simple way to visualize the reason why a patient's risk is low or high by assigning weight to the patient's past blood oxygen values.

Checkpoint Ensembles: Ensemble Methods from a Single Training Process

1 code implementation · 9 Oct 2017 · Hugh Chen, Scott Lundberg, Su-In Lee

We present the checkpoint ensembles method, which can learn ensemble models in a single training process.
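
A minimal sketch of the idea, with the details assumed: snapshot the model at checkpoints along one training run, then average the snapshots' predictions at test time.

```python
import copy
import torch

def train_with_checkpoints(model, loader, loss_fn, epochs, every=1):
    opt = torch.optim.Adam(model.parameters())
    checkpoints = []
    for epoch in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        if epoch % every == 0:
            checkpoints.append(copy.deepcopy(model).eval())  # free snapshot
    return checkpoints

@torch.no_grad()
def ensemble_predict(checkpoints, x):
    # average the snapshots' outputs, exactly as a plain ensemble would
    return torch.stack([m(x) for m in checkpoints]).mean(dim=0)
```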

Consistent feature attribution for tree ensembles

1 code implementation · 19 Jun 2017 · Scott M. Lundberg, Su-In Lee

Note that a newer, expanded version of this paper is now available at arXiv:1802.03888. It is critical in many applications to understand what features are important for a model, and why individual predictions were made.

Clustering · Feature Importance

An unexpected unity among methods for interpreting model predictions

no code implementations · 22 Nov 2016 · Scott Lundberg, Su-In Lee

Here, we present how a model-agnostic additive representation of the importance of input features unifies current methods.

Unity
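
The shared representation is the additive feature attribution form later associated with SHAP: each method explains a prediction with a simple model that is linear in binary variables z_i indicating feature presence, and the methods differ only in how they assign the attributions.

```latex
g(z) = \phi_0 + \sum_{i=1}^{d} \phi_i \, z_i , \qquad z_i \in \{0, 1\}
```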

Learning Graphical Models With Hubs

no code implementations · 28 Feb 2014 · Kean Ming Tan, Palma London, Karthik Mohan, Su-In Lee, Maryam Fazel, Daniela Witten

We consider the problem of learning a high-dimensional graphical model in which certain hub nodes are highly connected to many other nodes.

Node-Based Learning of Multiple Gaussian Graphical Models

no code implementations · 21 Mar 2013 · Karthik Mohan, Palma London, Maryam Fazel, Daniela Witten, Su-In Lee

We consider estimation under two distinct assumptions: (1) differences between the K networks are due to individual nodes that are perturbed across conditions, or (2) similarities among the K networks are due to the presence of common hub nodes that are shared across all K networks.

Structured Learning of Gaussian Graphical Models

no code implementations · NeurIPS 2012 · Karthik Mohan, Mike Chung, Seungyeop Han, Daniela Witten, Su-In Lee, Maryam Fazel

We consider estimation of multiple high-dimensional Gaussian graphical models corresponding to a single set of nodes under several distinct conditions.
