Search Results for author: Ian Covert

Found 14 papers, 11 papers with code

Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution

1 code implementation • 29 Jan 2024 • Ian Covert, Chanwoo Kim, Su-In Lee, James Zou, Tatsunori Hashimoto

Many tasks in explainable machine learning, such as data valuation and feature attribution, perform expensive computation for each data point and can be intractable for large datasets.

Data Valuation
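
The amortization idea suggested by the title can be illustrated with a small sketch: rather than running an expensive attribution procedure to convergence for every data point, train a model on cheap, noisy per-example estimates and let it predict attributions directly. This is a minimal illustration under assumed interfaces (the noisy estimator and the regressor below are hypothetical stand-ins), not the paper's implementation.

```python
# Minimal sketch of amortized attribution (hypothetical interfaces, not the paper's code).
# Idea: fit a regressor on cheap, noisy per-example attribution estimates so that
# expensive exact computation is replaced by a single forward pass at inference time.
import numpy as np
from sklearn.neural_network import MLPRegressor

def noisy_attribution_estimate(x):
    """Placeholder for a cheap, noisy per-example estimator (e.g., a few Monte Carlo
    samples of a Shapley-style attribution)."""
    rng = np.random.default_rng()
    return rng.normal(size=x.shape)  # stand-in for a real noisy estimate

# Training data: inputs paired with noisy attribution targets.
X = np.random.randn(1000, 20)
Y = np.stack([noisy_attribution_estimate(x) for x in X])

# Amortized model: maps an input directly to its attribution vector.
amortized = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=200)
amortized.fit(X, Y)

# At inference time, attributions for new points cost one forward pass.
fast_attributions = amortized.predict(np.random.randn(5, 20))
```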

Estimating Conditional Mutual Information for Dynamic Feature Selection

1 code implementation • 5 Jun 2023 • Soham Gadgil, Ian Covert, Su-In Lee

Dynamic feature selection, where we sequentially query features to make accurate predictions with a minimal budget, is a promising paradigm to reduce feature acquisition costs and provide transparency into a model's predictions.

feature selection
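
The sequential querying loop described above can be sketched as a greedy procedure: at each step, acquire the unobserved feature with the highest estimated conditional mutual information (CMI) given what has already been observed. The CMI estimator below is a placeholder assumption; the paper learns this estimator, which is not reproduced here.

```python
# Greedy dynamic feature selection sketch (the CMI estimator is a hypothetical stand-in).
import numpy as np

def estimated_cmi(feature_idx, observed_values):
    """Placeholder: returns an estimate of I(y; x_i | x_observed). In practice this
    would be a learned estimator conditioned on the currently observed features."""
    return np.random.rand()  # stand-in score

def select_features(x, budget):
    """Sequentially query up to `budget` features for a single example `x`."""
    observed = {}                         # feature index -> observed value
    candidates = set(range(len(x)))
    for _ in range(budget):
        # Score every unobserved feature by its estimated CMI with the label.
        best = max(candidates, key=lambda i: estimated_cmi(i, observed))
        observed[best] = x[best]          # "acquire" the feature at some cost
        candidates.remove(best)
    return observed

x = np.random.randn(30)
print(select_features(x, budget=5))
```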

Learning to Maximize Mutual Information for Dynamic Feature Selection

1 code implementation • 2 Jan 2023 • Ian Covert, Wei Qiu, Mingyu Lu, Nayoon Kim, Nathan White, Su-In Lee

Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets.

feature selection, Reinforcement Learning (RL)

What does a platypus look like? Generating customized prompts for zero-shot image classification

2 code implementations • ICCV 2023 • Sarah Pratt, Ian Covert, Rosanne Liu, Ali Farhadi

Unlike traditional classification models, open-vocabulary models classify among any arbitrary set of categories specified with natural language during inference.

Descriptive Image Classification +1
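
Open-vocabulary classification of the kind described above can be sketched as scoring an image embedding against text embeddings of natural-language prompts, with one or several prompts per candidate category. The encoders below are hypothetical stand-ins for a pretrained image-text model, and the prompt lists only loosely imitate the customized-prompt idea.

```python
# Sketch of open-vocabulary (zero-shot) classification with natural-language prompts.
# `encode_image` and `encode_text` are hypothetical stand-ins for a pretrained image-text model.
import numpy as np

def encode_image(image):
    return np.random.randn(512)          # placeholder image embedding

def encode_text(prompt):
    return np.random.randn(512)          # placeholder text embedding

def classify(image, prompts_per_class):
    """prompts_per_class: dict mapping class name -> list of descriptive prompts."""
    img = encode_image(image)
    img = img / np.linalg.norm(img)
    scores = {}
    for name, prompts in prompts_per_class.items():
        embs = np.stack([encode_text(p) for p in prompts])
        embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
        scores[name] = float((embs @ img).mean())   # average similarity over the class's prompts
    return max(scores, key=scores.get)

prompts = {
    "platypus": ["a photo of a platypus, a duck-billed mammal", "a platypus swimming"],
    "beaver":   ["a photo of a beaver with a flat tail", "a beaver building a dam"],
}
print(classify(image=None, prompts_per_class=prompts))
```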

Learning to Estimate Shapley Values with Vision Transformers

2 code implementations • 10 Jun 2022 • Ian Covert, Chanwoo Kim, Su-In Lee

Transformers have become a default architecture in computer vision, but understanding what drives their predictions remains a challenging problem.

FastSHAP: Real-Time Shapley Value Estimation

4 code implementations • ICLR 2022 • Neil Jethani, Mukund Sudarshan, Ian Covert, Su-In Lee, Rajesh Ranganath

Shapley values are widely used to explain black-box models, but they are costly to calculate because they require many model evaluations.
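
The cost mentioned above comes from the Shapley value's definition as a weighted average over feature subsets, which requires an exponential number of model evaluations when computed exactly. The brute-force sketch below (with a hypothetical `predict_with_subset` function for evaluating the model with features removed) makes that explicit; it is this cost that fast or amortized estimators aim to avoid.

```python
# Brute-force Shapley values for one prediction: O(2^d) model evaluations.
# `predict_with_subset` is a hypothetical helper that evaluates the model with
# only the given features present (e.g., the rest imputed or marginalized out).
from itertools import combinations
from math import factorial

def shapley_values(predict_with_subset, num_features):
    d = num_features
    values = [0.0] * d
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(d - k - 1) / factorial(d)
                gain = predict_with_subset(set(subset) | {i}) - predict_with_subset(set(subset))
                values[i] += weight * gain
    return values
```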

Disrupting Model Training with Adversarial Shortcuts

no code implementations • ICML Workshop AML 2021 • Ivan Evtimov, Ian Covert, Aditya Kusupati, Tadayoshi Kohno

When data is publicly released for human consumption, it is unclear how to prevent its unauthorized usage for machine learning purposes.

BIG-bench Machine Learning, Image Classification

Improving KernelSHAP: Practical Shapley Value Estimation via Linear Regression

3 code implementations • 2 Dec 2020 • Ian Covert, Su-In Lee

The Shapley value concept from cooperative game theory has become a popular technique for interpreting ML models, but efficiently estimating these values remains challenging, particularly in the model-agnostic setting.

regression
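
The regression view works roughly as follows: sample feature subsets, evaluate the model with each subset present, and fit a weighted least-squares problem whose weights come from the Shapley kernel; the resulting coefficients approximate the Shapley values. The sketch below is a simplified, unconstrained version with a hypothetical `value` set function, not the paper's estimator (it omits the efficiency constraint and paired sampling, among other refinements).

```python
# Simplified KernelSHAP-style estimation via weighted linear regression.
# `value(subset)` is a hypothetical set function returning the model's output with
# only the given features present.
import numpy as np
from math import comb

def kernel_weight(d, s):
    # Shapley kernel weight for a coalition of size s (0 < s < d).
    return (d - 1) / (comb(d, s) * s * (d - s))

def kernelshap(value, d, num_samples=512, seed=0):
    rng = np.random.default_rng(seed)
    X, y, w = [], [], []
    for _ in range(num_samples):
        s = int(rng.integers(1, d))                  # coalition size in 1..d-1
        subset = rng.choice(d, size=s, replace=False)
        z = np.zeros(d)
        z[subset] = 1.0                              # binary indicator of the coalition
        X.append(z)
        y.append(value(set(subset.tolist())))
        w.append(kernel_weight(d, s))
    X, y, w = np.array(X), np.array(y), np.array(w)
    X = np.column_stack([np.ones(len(X)), X])        # intercept absorbs the empty-coalition value
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[1:]                                  # per-feature Shapley value estimates
```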

Explaining by Removing: A Unified Framework for Model Explanation

3 code implementations • 21 Nov 2020 • Ian Covert, Scott Lundberg, Su-In Lee

We describe a new unified class of methods, removal-based explanations, that are based on the principle of simulating feature removal to quantify each feature's influence.

counterfactual, Counterfactual Reasoning
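
A minimal illustration of the removal principle: hold one feature out (here by replacing it with its dataset mean, one of several possible removal strategies) and measure how much the prediction changes. This is only a toy instance of the framework, with hypothetical function names.

```python
# Toy removal-based explanation: each feature's influence is the change in the
# prediction when that feature is "removed" (here: replaced with the dataset mean).
import numpy as np

def removal_explanation(model_predict, x, X_background):
    baseline = X_background.mean(axis=0)
    full = model_predict(x[None, :])[0]
    influence = np.zeros(len(x))
    for i in range(len(x)):
        x_removed = x.copy()
        x_removed[i] = baseline[i]          # simulate removing feature i
        influence[i] = full - model_predict(x_removed[None, :])[0]
    return influence
```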

Feature Removal Is a Unifying Principle for Model Explanation Methods

1 code implementation • 6 Nov 2020 • Ian Covert, Scott Lundberg, Su-In Lee

Researchers have proposed a wide variety of model explanation approaches, but it remains unclear how most methods are related or when one method is preferable to another.

Understanding Global Feature Contributions With Additive Importance Measures

3 code implementations • NeurIPS 2020 • Ian Covert, Scott Lundberg, Su-In Lee

Understanding the inner workings of complex machine learning models is a long-standing problem and most recent research has focused on local interpretability.

Feature Importance
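
For a global (dataset-level) rather than per-prediction view, one simple additive importance measure in this spirit is permutation importance: the average increase in loss when a feature column is scrambled. This standard technique is shown only to illustrate the global perspective; it is not the paper's own measure.

```python
# Permutation importance: a simple global importance measure (not the paper's method),
# computed as the loss increase when one feature column is shuffled across the dataset.
import numpy as np

def permutation_importance(model_predict, loss_fn, X, y, seed=0):
    rng = np.random.default_rng(seed)
    base_loss = loss_fn(y, model_predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        X_perm = X.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])   # break feature j's link to the target
        importances[j] = loss_fn(y, model_predict(X_perm)) - base_loss
    return importances
```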

Deep unsupervised feature selection

no code implementations • 25 Sep 2019 • Ian Covert, Uygar Sumbul, Su-In Lee

Unsupervised feature selection involves finding a small number of highly informative features, in the absence of a specific supervised learning task.

feature selection

Temporal Graph Convolutional Networks for Automatic Seizure Detection

no code implementations • 3 May 2019 • Ian Covert, Balu Krishnan, Imad Najm, Jiening Zhan, Matthew Shore, John Hixson, Ming Jack Po

Commonly used deep learning models for time series don't offer a way to leverage structural information, but this would be desirable in a model for structural time series.

Inductive Bias, Seizure Detection, +2
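
One way to combine structural and temporal information, in the spirit of the title, is to alternate a graph convolution over the channel graph (e.g., EEG electrodes) with a temporal convolution over time. The PyTorch sketch below is a generic illustration with an assumed normalized adjacency matrix, not the paper's architecture.

```python
# Generic temporal graph convolution sketch (illustrative only, not the paper's model).
# Input: (batch, time, nodes, features); `adj` is an assumed normalized adjacency matrix.
import torch
import torch.nn as nn

class TemporalGraphConv(nn.Module):
    def __init__(self, adj, in_features, hidden, out_channels, kernel_size=5):
        super().__init__()
        self.register_buffer("adj", adj)                    # (nodes, nodes), row-normalized
        self.graph_linear = nn.Linear(in_features, hidden)  # feature transform for the graph conv
        self.temporal = nn.Conv1d(hidden, out_channels, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                                    # x: (batch, time, nodes, features)
        # Graph convolution at every time step: mix each node with its neighbors.
        h = torch.relu(torch.einsum("nm,btmf->btnf", self.adj, self.graph_linear(x)))
        # Temporal convolution per node: reshape so time becomes the convolution axis.
        b, t, n, f = h.shape
        h = h.permute(0, 2, 3, 1).reshape(b * n, f, t)       # (batch*nodes, hidden, time)
        return self.temporal(h).reshape(b, n, -1, t)         # (batch, nodes, out_channels, time)

adj = torch.eye(19)                                          # placeholder graph over 19 channels
model = TemporalGraphConv(adj, in_features=1, hidden=16, out_channels=8)
out = model(torch.randn(2, 256, 19, 1))
```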

Neural Granger Causality

3 code implementations • 16 Feb 2018 • Alex Tank, Ian Covert, Nicholas Foti, Ali Shojaie, Emily Fox

We show that our neural Granger causality methods outperform state-of-the-art nonlinear Granger causality methods on the DREAM3 challenge data.

Time Series Analysis
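
The underlying idea can be sketched as follows: fit one neural network per target series on lagged values of all series, with a group-sparsity penalty on the first-layer weights grouped by candidate driver series; series whose weight group is driven to (near) zero are inferred not to Granger-cause the target. The PyTorch sketch below is a simplified illustration of that idea; lag handling and optimization details differ from the paper's models.

```python
# Simplified neural Granger causality sketch: an MLP for one target series with a
# group-lasso penalty on first-layer weights, grouped by candidate driver series.
import torch
import torch.nn as nn

num_series, lags, hidden = 5, 3, 32
mlp = nn.Sequential(nn.Linear(num_series * lags, hidden), nn.ReLU(), nn.Linear(hidden, 1))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-2)
lam = 0.1

# Rows of X hold lagged values of all series (ordered series-by-series), y the target's next value.
X = torch.randn(1024, num_series * lags)
y = torch.randn(1024, 1)

for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(mlp(X), y)
    W = mlp[0].weight                                 # (hidden, num_series * lags)
    groups = W.view(hidden, num_series, lags)         # one group of input columns per series
    loss = loss + lam * groups.norm(dim=(0, 2)).sum() # group-lasso penalty
    loss.backward()
    opt.step()

# Series whose input-weight group norm is (near) zero are inferred not to Granger-cause the target.
group_norms = mlp[0].weight.view(hidden, num_series, lags).detach().norm(dim=(0, 2))
print(group_norms)
```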
