Search Results for author: Scott Lundberg

Found 19 papers, 10 papers with code

Adaptive Testing and Debugging of NLP Models

no code implementations • ACL 2022 • Marco Tulio Ribeiro, Scott Lundberg

Current approaches to testing and debugging NLP models rely on highly variable human creativity and extensive labor, or only work for a very restrictive class of bugs.

Sparks of Artificial General Intelligence: Early experiments with GPT-4

2 code implementations • 22 Mar 2023 • Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, Yi Zhang

We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM for example) that exhibit more general intelligence than previous AI models.

Arithmetic Reasoning • Math Word Problem Solving

ART: Automatic multi-step reasoning and tool-use for large language models

2 code implementations • 16 Mar 2023 • Bhargavi Paranjape, Scott Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, Marco Tulio Ribeiro

We introduce Automatic Reasoning and Tool-use (ART), a framework that uses frozen LLMs to automatically generate intermediate reasoning steps as a program.
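A minimal sketch of the ART-style loop described above, with hypothetical helper names (call_llm, the TOOLS registry) standing in for the authors' released prompts and tool library: a frozen LLM writes a multi-step program, and a controller executes the tool calls it requests before the program finishes.

```python
# Sketch of an ART-style controller (hypothetical helpers, not the released code):
# the LLM is frozen and only generates a program; the controller runs its tool calls.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {
    "calculator": lambda expr: str(eval(expr)),        # toy tool: arithmetic
    "search": lambda q: f"(stub search result for '{q}')",
}

def call_llm(prompt: str) -> str:
    """Stand-in for a frozen LLM; here it returns a canned multi-step program."""
    return "Step 1: [calculator] 17 * 23\nStep 2: state the product as the answer"

def run_art(task: str) -> str:
    program = call_llm(f"Decompose and solve with tools:\n{task}")
    trace = []
    for step in program.splitlines():
        if "[" in step and "]" in step:                 # a step that requests a tool
            name = step[step.index("[") + 1 : step.index("]")]
            arg = step[step.index("]") + 1 :].strip()
            step += f"  -> tool output: {TOOLS[name](arg)}"
        trace.append(step)
    return "\n".join(trace)

print(run_art("What is 17 * 23?"))
```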

Adaptive Testing of Computer Vision Models

1 code implementation • ICCV 2023 • Irena Gao, Gabriel Ilharco, Scott Lundberg, Marco Tulio Ribeiro

Vision models often fail systematically on groups of data that share common semantic characteristics (e.g., rare objects or unusual scenes), but identifying these failure modes is a challenge.

Image Captioning • object-detection • +2

Fixing Model Bugs with Natural Language Patches

1 code implementation • 7 Nov 2022 • Shikhar Murty, Christopher D. Manning, Scott Lundberg, Marco Tulio Ribeiro

Current approaches for fixing systematic problems in NLP models (e.g., regex patches, finetuning on more data) are either brittle, or labor-intensive and liable to shortcuts.

Relation Extraction • Sentiment Analysis

Explaining by Removing: A Unified Framework for Model Explanation

3 code implementations • 21 Nov 2020 • Ian Covert, Scott Lundberg, Su-In Lee

We describe a new unified class of methods, removal-based explanations, that are based on the principle of simulating feature removal to quantify each feature's influence.

counterfactual • Counterfactual Reasoning
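As a toy illustration of the removal principle (not the paper's full framework, which unifies many removal and summarization strategies), one can "remove" a feature by replacing it with its mean and measure how much the model's prediction changes:

```python
# Toy feature-removal influence: replace one feature with its mean (a simple
# stand-in for "removing" it) and record the change in the model's prediction.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = 3 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=300)
model = LinearRegression().fit(X, y)

x = X[:1]                                   # explain one sample
baseline = X.mean(axis=0)
for i in range(X.shape[1]):
    x_removed = x.copy()
    x_removed[0, i] = baseline[i]           # simulate removing feature i
    influence = model.predict(x)[0] - model.predict(x_removed)[0]
    print(f"feature {i}: influence {influence:+.3f}")
```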

Feature Removal Is a Unifying Principle for Model Explanation Methods

1 code implementation • 6 Nov 2020 • Ian Covert, Scott Lundberg, Su-In Lee

Researchers have proposed a wide variety of model explanation approaches, but it remains unclear how most methods are related or when one method is preferable to another.

Shapley Flow: A Graph-based Approach to Interpreting Model Predictions

1 code implementation • 27 Oct 2020 • Jiaxuan Wang, Jenna Wiens, Scott Lundberg

A causal graph, which encodes the relationships among input variables, can aid in assigning feature importance.

Feature Importance
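A toy calculation (not the Shapley Flow algorithm itself) of why the causal graph matters: when an upstream feature also acts through a downstream one, crediting only direct effects understates its total influence.

```python
# Linear structural model (noise omitted): x2 = a*x1,  y = b*x2 + c*x1.
# x1 influences y directly (c) and indirectly through x2 (a*b); a graph-aware
# attribution can surface both pieces, a flat one sees only the direct part.
a, b, c = 2.0, 0.5, 1.0

direct_effect_x1 = c            # holding x2 fixed
indirect_effect_x1 = a * b      # the influence of x1 routed through x2
total_effect_x1 = direct_effect_x1 + indirect_effect_x1

print(direct_effect_x1, indirect_effect_x1, total_effect_x1)   # 1.0 1.0 2.0
```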

True to the Model or True to the Data?

no code implementations • 29 Jun 2020 • Hugh Chen, Joseph D. Janizek, Scott Lundberg, Su-In Lee

Furthermore, we argue that the choice comes down to whether it is desirable to be true to the model or true to the data.

BIG-bench Machine Learning
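In practice this choice surfaces as the interventional versus observational (path-dependent) variants of SHAP values. A minimal sketch using the shap package; the exact parameter spellings below follow shap's TreeExplainer as commonly documented and should be treated as an assumption that may vary across versions:

```python
# "True to the model": interventional SHAP values break feature dependence.
# "True to the data": the path-dependent variant respects the data distribution.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=200)      # two highly correlated features
y = X[:, 0] + rng.normal(scale=0.1, size=200)
model = RandomForestRegressor(n_estimators=50).fit(X, y)

true_to_model = shap.TreeExplainer(
    model, data=X, feature_perturbation="interventional"
).shap_values(X)
true_to_data = shap.TreeExplainer(
    model, feature_perturbation="tree_path_dependent"
).shap_values(X)

# Mean absolute attribution per feature under each convention
print(np.abs(true_to_model).mean(axis=0), np.abs(true_to_data).mean(axis=0))
```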

Understanding Global Feature Contributions With Additive Importance Measures

3 code implementations • NeurIPS 2020 • Ian Covert, Scott Lundberg, Su-In Lee

Understanding the inner workings of complex machine learning models is a long-standing problem and most recent research has focused on local interpretability.

Feature Importance

Forecasting adverse surgical events using self-supervised transfer learning for physiological signals

no code implementations • 12 Feb 2020 • Hugh Chen, Scott Lundberg, Gabe Erion, Jerry H. Kim, Su-In Lee

Here, we present a transferable embedding method (i.e., a method to transform time series signals into input features for predictive machine learning models) named PHASE (PHysiologicAl Signal Embeddings) that enables us to more accurately forecast adverse surgical outcomes based on physiological signals.

Time Series • Time Series Analysis • +1

Explaining Models by Propagating Shapley Values of Local Components

no code implementations • 27 Nov 2019 • Hugh Chen, Scott Lundberg, Su-In Lee

In healthcare, making the best possible predictions with complex models (e.g., neural networks, ensembles/stacks of different models) can impact patient welfare.

Improving performance of deep learning models with axiomatic attribution priors and expected gradients

3 code implementations • ICLR 2020 • Gabriel Erion, Joseph D. Janizek, Pascal Sturmfels, Scott Lundberg, Su-In Lee

Recent research has demonstrated that feature attribution methods for deep networks can themselves be incorporated into training; these attribution priors optimize for a model whose attributions have certain desirable properties -- most frequently, that particular features are important or unimportant.

Interpretable Machine Learning
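A minimal sketch of the idea, not the authors' released implementation: approximate expected-gradients attributions inside the training loop and add a penalty on them (here an L1 sparsity prior) to the task loss. The model, data, and hyperparameters below are toy placeholders.

```python
# Attribution prior sketch: differentiable attributions are computed during
# training and penalized so the learned model's attributions are sparse.
import torch
import torch.nn as nn

def expected_gradients(model, x, background, n_samples=8):
    """Monte Carlo approximation of expected gradients:
    attr_i ~ E_{x'~data, a~U(0,1)}[(x_i - x'_i) * dF/dx_i evaluated at x' + a*(x - x')]."""
    attributions = torch.zeros_like(x)
    for _ in range(n_samples):
        idx = torch.randint(0, background.shape[0], (x.shape[0],))
        ref = background[idx]                      # random baseline drawn from the data
        alpha = torch.rand(x.shape[0], 1)          # random interpolation point
        z = (ref + alpha * (x - ref)).requires_grad_(True)
        out = model(z).sum()
        grads, = torch.autograd.grad(out, z, create_graph=True)
        attributions = attributions + (x - ref) * grads
    return attributions / n_samples

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
X, y = torch.randn(256, 10), torch.randn(256, 1)

for _ in range(10):                                # toy training loop
    pred = model(X)
    eg = expected_gradients(model, X, X)
    loss = nn.functional.mse_loss(pred, y) + 1e-3 * eg.abs().mean()  # sparsity prior
    opt.zero_grad(); loss.backward(); opt.step()
```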

Physiological Signal Embeddings (PHASE) via Interpretable Stacked Models

no code implementations • ICLR 2019 • Hugh Chen, Scott Lundberg, Gabe Erion, Su-In Lee

Here, we present the PHASE (PHysiologicAl Signal Embeddings) framework, which consists of three components: i) learning neural network embeddings of physiological signals, ii) predicting outcomes based on the learned embedding, and iii) interpreting the prediction results by estimating feature attributions in the "stacked" models (i.e., feature embedding model followed by prediction model).

Network Embedding

Hybrid Gradient Boosting Trees and Neural Networks for Forecasting Operating Room Data

no code implementations • 23 Jan 2018 • Hugh Chen, Scott Lundberg, Su-In Lee

In this paper, we present feature learning via long short-term memory (LSTM) networks and prediction via gradient boosting trees (XGB).

Representation Learning • Time Series • +1
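A minimal sketch of the hybrid pipeline with toy data and assumed shapes: an LSTM acts purely as a feature learner over a raw time-series window, and a gradient boosted tree model makes the prediction from its final hidden state. The paper uses XGBoost; sklearn's GradientBoostingRegressor stands in here, and the encoder is left untrained for brevity.

```python
# Hybrid sketch: LSTM hidden state -> gradient boosted trees.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import GradientBoostingRegressor

torch.manual_seed(0)
X = torch.randn(500, 60, 1)           # 500 windows of 60 time steps, 1 signal channel
y = np.random.randn(500)              # toy target (e.g., a future signal value)

encoder = nn.LSTM(input_size=1, hidden_size=16, batch_first=True)
with torch.no_grad():                 # the paper trains the encoder; frozen here for brevity
    _, (h_n, _) = encoder(X)
features = h_n.squeeze(0).numpy()     # (500, 16) learned-embedding features

gbt = GradientBoostingRegressor().fit(features, y)
print(gbt.predict(features[:5]))
```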

Checkpoint Ensembles: Ensemble Methods from a Single Training Process

no code implementations • 9 Oct 2017 • Hugh Chen, Scott Lundberg, Su-In Lee

We present the checkpoint ensembles method, which learns ensemble models from a single training process.
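A minimal sketch of the idea with a toy model (not the paper's exact procedure): snapshot the model at several points of one training run and average the snapshots' predictions, instead of using only the final weights or training several models from scratch.

```python
# Checkpoint ensembling sketch: collect snapshots during one run, average predictions.
import copy
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
X, y = torch.randn(256, 10), torch.randn(256, 1)

snapshots = []
for epoch in range(30):
    loss = nn.functional.mse_loss(model(X), y)
    opt.zero_grad(); loss.backward(); opt.step()
    if epoch % 10 == 9:                            # checkpoint every 10 epochs
        snapshots.append(copy.deepcopy(model).eval())

x_test = torch.randn(5, 10)
with torch.no_grad():
    ensemble_pred = torch.stack([m(x_test) for m in snapshots]).mean(dim=0)
print(ensemble_pred)
```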

A Unified Approach to Interpreting Model Predictions

17 code implementations • NeurIPS 2017 • Scott Lundberg, Su-In Lee

Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications.

Feature Importance • Interpretable Machine Learning
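The SHAP method introduced in this paper is distributed as the shap package; a minimal usage sketch with toy data (exact return shapes and API details may differ across shap versions):

```python
# Compute SHAP values for a tree ensemble with the shap package.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=50).fit(X, y)
explainer = shap.TreeExplainer(model)          # fast SHAP values for tree ensembles
shap_values = explainer.shap_values(X)         # one additive attribution per feature per sample
print(shap_values.shape)                       # (200, 4); each row sums to prediction - expected value
```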

An unexpected unity among methods for interpreting model predictions

no code implementations • 22 Nov 2016 • Scott Lundberg, Su-In Lee

Here, we present how a model-agnostic additive representation of the importance of input features unifies current methods.

Unity
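The model-agnostic additive representation referred to here takes the shared form written out in the follow-up NeurIPS 2017 paper listed above ("A Unified Approach to Interpreting Model Predictions"): the explanation model is a linear function of simplified binary inputs, with one importance value per feature.

```latex
% Additive feature attribution: the explanation model g assigns each feature i an
% importance \phi_i and approximates the original model locally as a sum of those effects.
g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i \, z'_i , \qquad z' \in \{0, 1\}^M
```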
