1 code implementation • 14 Oct 2024 • Ian Covert, Tony Sun, James Zou, Tatsunori Hashimoto
We propose locality alignment, a new and efficient post-training stage for ViTs, and MaskEmbed, a novel fine-tuning procedure that uses a masked reconstruction loss to learn the semantic contribution of each image patch.
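As a rough illustration of the masked-reconstruction idea, the sketch below zeroes out a random subset of patch embeddings and asks a decoder to reconstruct a frozen teacher's view of the same masked image. The module names, signatures (including the teacher's patch_mask argument), and masking scheme are assumptions for illustration, not the paper's exact recipe.

```python
import torch
import torch.nn as nn

def maskembed_loss(encoder, decoder, frozen_teacher, images, mask_prob=0.5):
    # images: (B, 3, H, W); encoder returns per-patch embeddings (B, N, D)
    patch_emb = encoder(images)                              # trainable ViT
    B, N, D = patch_emb.shape

    # Sample a random binary mask over patches for each image.
    mask = (torch.rand(B, N, device=images.device) < mask_prob).float()

    # Student view: zero out masked patch embeddings, then decode.
    student_pred = decoder(patch_emb * mask.unsqueeze(-1))   # (B, N, D)

    # Teacher view: the frozen teacher embeds the image with the same
    # patches masked out (hypothetical interface).
    with torch.no_grad():
        target = frozen_teacher(images, patch_mask=mask)     # (B, N, D)

    # Reconstruction loss: per-patch outputs should explain the masked view.
    return nn.functional.mse_loss(student_pred, target)
```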
1 code implementation • 3 Jun 2024 • Rahul Thapa, Kezhen Chen, Ian Covert, Rahul Chalamala, Ben Athiwaratkun, Shuaiwen Leon Song, James Zou
Recent advances in vision-language models (VLMs) have demonstrated the advantages of processing images at higher resolutions and utilizing multi-crop features to preserve native resolution details.
Ranked #146 on Visual Question Answering on MM-Vet
1 code implementation • 30 May 2024 • Ian Covert, Wenlong Ji, Tatsunori Hashimoto, James Zou
We introduce a new perspective by investigating scaling behavior for the value of individual data points: we find that a data point's contribution to a model's performance shrinks predictably with the size of the dataset in a log-linear manner.
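The log-linear relationship can be pictured with a simple fit: if log-contribution is roughly linear in log(dataset size), a least-squares line on measurements at small sizes extrapolates to larger datasets. The numbers below are placeholders, not results from the paper.

```python
import numpy as np

# Placeholder measurements: a point's estimated contribution at several dataset sizes.
dataset_sizes = np.array([100, 200, 400, 800, 1600])
contributions = np.array([2.1e-3, 1.2e-3, 6.5e-4, 3.4e-4, 1.9e-4])

# Fit log(contribution) = slope * log(n) + intercept.
slope, intercept = np.polyfit(np.log(dataset_sizes), np.log(contributions), deg=1)

def predicted_contribution(n):
    """Extrapolated contribution of the data point at dataset size n."""
    return np.exp(intercept) * n ** slope

print(predicted_contribution(10_000))
```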
3 code implementations • 29 Jan 2024 • Ian Covert, Chanwoo Kim, Su-In Lee, James Zou, Tatsunori Hashimoto
Many tasks in explainable machine learning, such as data valuation and feature attribution, require expensive computation for each data point, making them intractable for large datasets.
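One way to sidestep the per-point cost, in the spirit of this work, is to amortize: train a small network on cheap, noisy estimates of the expensive quantity for some points and let it predict the quantity for the rest. The sketch below is an illustrative assumption about that setup, with hypothetical shapes and training details rather than the paper's procedure.

```python
import torch
import torch.nn as nn

def train_amortized_model(features, noisy_estimates, epochs=100):
    # features: (N, d) inputs; noisy_estimates: (N,) cheap but noisy targets
    model = nn.Sequential(nn.Linear(features.shape[1], 128),
                          nn.ReLU(),
                          nn.Linear(128, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        pred = model(features).squeeze(-1)
        loss = nn.functional.mse_loss(pred, noisy_estimates)
        loss.backward()
        opt.step()
    # model(x) now approximates the expensive quantity for unseen points.
    return model
```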
1 code implementation • 5 Jun 2023 • Soham Gadgil, Ian Covert, Su-In Lee
Dynamic feature selection, where we sequentially query features to make accurate predictions with a minimal budget, is a promising paradigm to reduce feature acquisition costs and provide transparency into a model's predictions.
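The sequential setup can be sketched as a simple acquisition loop: a selector scores the unobserved features, the highest-scoring one is queried, and a predictor uses the partially observed input. The selector and predictor below are hypothetical trained modules; the paper's training objective is not shown.

```python
import torch

def predict_with_budget(x, selector, predictor, budget):
    # x: (d,) full feature vector, revealed only as features are queried.
    d = x.shape[-1]
    observed = torch.zeros(d)          # binary mask of acquired features
    values = torch.zeros(d)            # acquired feature values
    for _ in range(budget):
        scores = selector(values, observed)                    # score candidates
        scores = scores.masked_fill(observed.bool(), float('-inf'))
        idx = scores.argmax()
        observed[idx] = 1.0
        values[idx] = x[idx]                                   # query chosen feature
    return predictor(values, observed)                         # predict from subset
```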
1 code implementation • 2 Jan 2023 • Ian Covert, Wei Qiu, Mingyu Lu, Nayoon Kim, Nathan White, Su-In Lee
Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets.
3 code implementations • ICCV 2023 • Sarah Pratt, Ian Covert, Rosanne Liu, Ali Farhadi
Unlike traditional classification models, open-vocabulary models classify among any arbitrary set of categories specified with natural language during inference.
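A minimal sketch of this inference-time behavior, assuming CLIP-style image and text encoders (hypothetical stand-ins here): embed a prompt for each candidate class name and assign the image to the class whose text embedding is closest in cosine similarity.

```python
import torch

def open_vocab_classify(image, class_names, image_encoder, text_encoder):
    # The label set is just a list of strings chosen at inference time.
    prompts = [f"a photo of a {name}" for name in class_names]
    text_emb = text_encoder(prompts)                     # (C, D)
    image_emb = image_encoder(image)                     # (1, D)

    # Cosine similarity between the image and each class prompt.
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    sims = image_emb @ text_emb.t()                      # (1, C)
    return class_names[sims.argmax().item()]
```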
2 code implementations • 10 Jun 2022 • Ian Covert, Chanwoo Kim, Su-In Lee
Transformers have become a default architecture in computer vision, but understanding what drives their predictions remains a challenging problem.
5 code implementations • ICLR 2022 • Neil Jethani, Mukund Sudarshan, Ian Covert, Su-In Lee, Rajesh Ranganath
Shapley values are widely used to explain black-box models, but they are costly to calculate because they require many model evaluations.
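The cost is easy to see from the classical Shapley formula, which averages a feature's marginal contribution over all subsets of the remaining features and therefore requires the value function on exponentially many subsets. The brute-force implementation below is for illustration; `v` maps a feature subset to a model-based payoff (e.g., the prediction with the other features removed).

```python
from itertools import combinations
from math import factorial

def exact_shapley(v, d):
    """Exact Shapley values for a d-player game with value function v."""
    phi = [0.0] * d
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Classical weight |S|! (d - |S| - 1)! / d!
                weight = factorial(len(S)) * factorial(d - len(S) - 1) / factorial(d)
                phi[i] += weight * (v(set(S) | {i}) - v(set(S)))
    return phi
```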
no code implementations • ICML Workshop AML 2021 • Ivan Evtimov, Ian Covert, Aditya Kusupati, Tadayoshi Kohno
When data is publicly released for human consumption, it is unclear how to prevent its unauthorized usage for machine learning purposes.
4 code implementations • 2 Dec 2020 • Ian Covert, Su-In Lee
The Shapley value concept from cooperative game theory has become a popular technique for interpreting ML models, but efficiently estimating these values remains challenging, particularly in the model-agnostic setting.
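A standard model-agnostic workaround is Monte Carlo estimation over random feature orderings, which trades exactness for a controllable number of value-function evaluations; the sketch below shows the generic permutation-sampling estimator rather than the specific estimator studied in the paper.

```python
import random

def sampled_shapley(v, d, n_permutations=1000):
    """Monte Carlo Shapley estimates via random permutations of the d features."""
    phi = [0.0] * d
    for _ in range(n_permutations):
        order = list(range(d))
        random.shuffle(order)
        S = set()
        prev = v(S)
        for i in order:
            # Marginal contribution of feature i when it joins the preceding features.
            S.add(i)
            cur = v(S)
            phi[i] += (cur - prev) / n_permutations
            prev = cur
    return phi
```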
3 code implementations • 21 Nov 2020 • Ian Covert, Scott Lundberg, Su-In Lee
We describe a new unified class of methods, removal-based explanations, that are based on the principle of simulating feature removal to quantify each feature's influence.
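The shared primitive is simple: "remove" a subset of features, for instance by replacing them with baseline values (one of several removal strategies the framework covers), and measure how the model's output changes. A minimal sketch, assuming a model that accepts a batch of feature vectors:

```python
def removal_effect(model, x, removed, baseline):
    # x, baseline: (d,) arrays; removed: iterable of feature indices to "remove".
    x_removed = x.copy()
    x_removed[list(removed)] = baseline[list(removed)]
    # Influence of the removed features = change in the model's prediction.
    return model(x[None])[0] - model(x_removed[None])[0]
```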
1 code implementation • 6 Nov 2020 • Ian Covert, Scott Lundberg, Su-In Lee
Researchers have proposed a wide variety of model explanation approaches, but it remains unclear how most methods are related or when one method is preferable to another.
3 code implementations • NeurIPS 2020 • Ian Covert, Scott Lundberg, Su-In Lee
Understanding the inner workings of complex machine learning models is a long-standing problem, and most recent research has focused on local interpretability.
no code implementations • 25 Sep 2019 • Ian Covert, Uygar Sumbul, Su-In Lee
Unsupervised feature selection involves finding a small number of highly informative features, in the absence of a specific supervised learning task.
no code implementations • 3 May 2019 • Ian Covert, Balu Krishnan, Imad Najm, Jiening Zhan, Matthew Shore, John Hixson, Ming Jack Po
Commonly used deep learning models for time series do not offer a way to leverage structural information, yet such information would be desirable in a model for structural time series.
3 code implementations • 16 Feb 2018 • Alex Tank, Ian Covert, Nicholas Foti, Ali Shojaie, Emily Fox
We show that our neural Granger causality methods outperform state-of-the-art nonlinear Granger causality methods on the DREAM3 challenge data.
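One common neural Granger causality construction, sketched below under illustrative assumptions about the architecture and penalty strength: fit a per-target MLP on lagged values of all series, with a group-lasso penalty that groups the first-layer weights by input series, and read off Granger non-causality from the groups driven to zero.

```python
import torch
import torch.nn as nn

class LaggedMLP(nn.Module):
    """Predicts one target series from lagged values of all series."""

    def __init__(self, n_series, n_lags, hidden=32):
        super().__init__()
        self.n_series, self.n_lags = n_series, n_lags
        self.input = nn.Linear(n_series * n_lags, hidden)
        self.out = nn.Sequential(nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x_lagged):
        # x_lagged: (B, n_series * n_lags), ordered series-major
        # (all lags of series 0, then all lags of series 1, ...).
        return self.out(self.input(x_lagged))

    def group_lasso_penalty(self):
        # First-layer weights reshaped to (hidden, n_series, n_lags): one group per series.
        W = self.input.weight.view(-1, self.n_series, self.n_lags)
        return (W.pow(2).sum(dim=(0, 2)) + 1e-12).sqrt().sum()

# Training loop (sketch): loss = MSE(prediction, target) + lam * model.group_lasso_penalty()
```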