Search Results for author: Gabriel Ilharco

Found 26 papers, 18 papers with code

TaskWeb: Selecting Better Source Tasks for Multi-task NLP

1 code implementation 22 May 2023 Joongwon Kim, Akari Asai, Gabriel Ilharco, Hannaneh Hajishirzi

TaskShop uses TaskWeb to estimate the benefit of using a source task for learning a new target task, and to choose a subset of helpful training tasks for multi-task training.

Multi-Task Learning
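
As a rough illustration of the selection idea in the entry above: given TaskWeb-style pairwise transfer scores and some similarity estimate between a new target task and known pivot tasks, candidate source tasks can be ranked by their expected benefit. This is a hedged sketch only, not the TaskShop implementation; `transfer`, `target_similarity`, and `rank_source_tasks` are made-up names.

```python
# Hedged sketch (not the paper's code): rank candidate source tasks for a new
# target task by combining pairwise transfer scores with a similarity estimate
# between the target and known "pivot" tasks.
from typing import Dict


def rank_source_tasks(
    transfer: Dict[str, Dict[str, float]],   # transfer[src][pivot] = observed gain of src -> pivot
    target_similarity: Dict[str, float],     # hypothetical similarity of the new target to each pivot
    top_k: int = 5,
):
    scores = {}
    for src, gains in transfer.items():
        # Weight each observed src -> pivot gain by how similar that pivot is to the new target.
        weighted = [target_similarity.get(pivot, 0.0) * gain for pivot, gain in gains.items()]
        scores[src] = sum(weighted) / max(len(weighted), 1)
    # Return the top-k most promising source tasks for multi-task training.
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```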

Reproducible scaling laws for contrastive language-image learning

3 code implementations CVPR 2023 Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, Jenia Jitsev

To address these limitations, we investigate scaling laws for contrastive language-image pre-training (CLIP) with the public LAION dataset and the open-source OpenCLIP repository.

Ranked #6 on Open Vocabulary Attribute Detection on OVAD-Box benchmark (using extra training data)

Image Classification Open Vocabulary Attribute Detection +3
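
To make the scaling-law framing above concrete, here is a rough sketch of fitting a power law error ≈ a · compute^(-b) with a log-log linear fit. The numbers are invented placeholders, not measurements from the paper.

```python
# Hedged sketch: fit a power law error ~ a * compute**(-b) to hypothetical
# (compute, error) measurements, as one would when studying scaling trends.
import numpy as np

compute = np.array([1e9, 1e10, 1e11, 1e12])   # illustrative compute budgets
error = np.array([0.52, 0.41, 0.33, 0.27])    # illustrative zero-shot error rates

# Linear fit in log-log space: log(error) = slope * log(compute) + intercept.
slope, intercept = np.polyfit(np.log(compute), np.log(error), 1)
a, b = np.exp(intercept), -slope
print(f"error ~ {a:.3f} * compute^(-{b:.3f})")

# Extrapolate to a larger budget (purely illustrative).
print("predicted error at 1e13:", a * (1e13) ** (-b))
```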

Editing Models with Task Arithmetic

3 code implementations 8 Dec 2022 Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, Ali Farhadi

Changing how pre-trained models behave -- e.g., improving their performance on a downstream task or mitigating biases learned during pre-training -- is a common practice when developing machine learning systems.

Negation
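
The task-arithmetic recipe above lends itself to a very small sketch: a task vector is the element-wise difference between fine-tuned and pre-trained weights, and editing a model amounts to adding (or negating) scaled task vectors. A simplified illustration over PyTorch state dicts, not the released codebase:

```python
# Hedged sketch of the task-vector idea: adding a task vector improves the task,
# negating it (coeff = -1.0) can suppress a learned behaviour.
import torch


def task_vector(pretrained_sd, finetuned_sd):
    # Element-wise difference between fine-tuned and pre-trained weights.
    return {k: finetuned_sd[k] - pretrained_sd[k] for k in pretrained_sd}


def apply_task_vectors(pretrained_sd, vectors, coeff=1.0):
    # Add each (scaled) task vector to the pre-trained weights.
    edited = dict(pretrained_sd)
    for vec in vectors:
        edited = {k: edited[k] + coeff * vec[k] for k in edited}
    return edited


# Usage with hypothetical models:
# tau = task_vector(base.state_dict(), finetuned.state_dict())
# model.load_state_dict(apply_task_vectors(base.state_dict(), [tau], coeff=-1.0))
```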

Adaptive Testing of Computer Vision Models

1 code implementation ICCV 2023 Irena Gao, Gabriel Ilharco, Scott Lundberg, Marco Tulio Ribeiro

Vision models often fail systematically on groups of data that share common semantic characteristics (e.g., rare objects or unusual scenes), but identifying these failure modes is a challenge.

Image Captioning object-detection +2

Quality Not Quantity: On the Interaction between Dataset Design and Robustness of CLIP

1 code implementation 10 Aug 2022 Thao Nguyen, Gabriel Ilharco, Mitchell Wortsman, Sewoong Oh, Ludwig Schmidt

Web-crawled datasets have enabled remarkable generalization capabilities in recent image-text models such as CLIP (Contrastive Language-Image pre-training) or Flamingo, but little is known about the dataset creation processes.

Patching open-vocabulary models by interpolating weights

1 code implementation 10 Aug 2022 Gabriel Ilharco, Mitchell Wortsman, Samir Yitzhak Gadre, Shuran Song, Hannaneh Hajishirzi, Simon Kornblith, Ali Farhadi, Ludwig Schmidt

We study model patching, where the goal is to improve accuracy on specific tasks without degrading accuracy on tasks where performance is already adequate.

Image Classification
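
A hedged sketch of the interpolation step behind patching as described above: mix the zero-shot and task-fine-tuned weights element-wise, then pick the mixing coefficient that preserves accuracy on both the patching tasks and the already-supported tasks. `evaluate`, `patch_val`, and `supported_val` are hypothetical.

```python
# Hedged sketch of weight-space patching via linear interpolation of state dicts.
def interpolate_state_dicts(zero_shot_sd, finetuned_sd, alpha):
    """Return (1 - alpha) * zero-shot + alpha * fine-tuned, element-wise."""
    return {k: (1 - alpha) * zero_shot_sd[k] + alpha * finetuned_sd[k] for k in zero_shot_sd}


# Hypothetical selection loop: try a few alphas on held-out data from the
# patching tasks and the supported (original) tasks, keep the best trade-off.
# for alpha in (0.25, 0.5, 0.75):
#     model.load_state_dict(interpolate_state_dicts(zs_sd, ft_sd, alpha))
#     acc_patch = evaluate(model, patch_val)
#     acc_supported = evaluate(model, supported_val)
```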

Data Determines Distributional Robustness in Contrastive Language Image Pre-training (CLIP)

2 code implementations 3 May 2022 Alex Fang, Gabriel Ilharco, Mitchell Wortsman, Yuhao Wan, Vaishaal Shankar, Achal Dave, Ludwig Schmidt

Contrastively trained language-image models such as CLIP, ALIGN, and BASIC have demonstrated unprecedented robustness to multiple challenging natural distribution shifts.

Ranked #94 on Image Classification on ObjectNet (using extra training data)

Image Classification

CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation

1 code implementation CVPR 2023 Samir Yitzhak Gadre, Mitchell Wortsman, Gabriel Ilharco, Ludwig Schmidt, Shuran Song

To better evaluate L-ZSON, we introduce the Pasture benchmark, which considers finding uncommon objects, objects described by spatial and appearance attributes, and hidden objects described relative to visible objects.

Image Classification Object Localization +1

Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time

5 code implementations 10 Mar 2022 Mitchell Wortsman, Gabriel Ilharco, Samir Yitzhak Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S. Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, Ludwig Schmidt

The conventional recipe for maximizing model accuracy is to (1) train multiple models with various hyperparameters and (2) pick the individual model which performs best on a held-out validation set, discarding the remainder.

Ranked #1 on Image Classification on ImageNet V2 (using extra training data)

Domain Generalization Image Classification +2
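
A simplified sketch of the soup idea above: a uniform soup averages all fine-tuned checkpoints, while a greedy soup keeps a checkpoint only if adding it to the running average does not hurt held-out accuracy. `evaluate` is a hypothetical callable returning validation accuracy for a state dict; this is an illustration, not the paper's released code.

```python
# Hedged sketch of "model soup" weight averaging over PyTorch-style state dicts.
def uniform_soup(state_dicts):
    n = len(state_dicts)
    return {k: sum(sd[k] for sd in state_dicts) / n for k in state_dicts[0]}


def greedy_soup(state_dicts, evaluate):
    # Assume state_dicts are sorted by individual held-out accuracy, best first.
    kept = [state_dicts[0]]
    best = evaluate(uniform_soup(kept))
    for sd in state_dicts[1:]:
        candidate = evaluate(uniform_soup(kept + [sd]))
        if candidate >= best:  # keep the checkpoint only if the soup improves
            kept.append(sd)
            best = candidate
    return uniform_soup(kept)
```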

Robust fine-tuning of zero-shot models

3 code implementations CVPR 2022 Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo-Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, Ludwig Schmidt

Compared to standard fine-tuning, WiSE-FT provides large accuracy improvements under distribution shift, while preserving high accuracy on the target distribution.

Ranked #12 on Image Classification on ObjectNet (using extra training data)

Image Classification Transfer Learning
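
A rough sketch of weight-space ensembling in the WiSE-FT spirit: the same element-wise interpolation as in the patching sketch above, swept over a few mixing coefficients while tracking accuracy on the target distribution and under distribution shift. The evaluation helpers and loaders are hypothetical.

```python
# Hedged sketch: interpolate zero-shot and fine-tuned weights and record
# accuracy on the target distribution and on a shifted test set.
def mix(zero_shot_sd, finetuned_sd, alpha):
    return {k: (1 - alpha) * zero_shot_sd[k] + alpha * finetuned_sd[k] for k in zero_shot_sd}


def sweep(model, zero_shot_sd, finetuned_sd, target_loader, shift_loader, evaluate):
    results = []
    for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
        model.load_state_dict(mix(zero_shot_sd, finetuned_sd, alpha))
        results.append((alpha, evaluate(model, target_loader), evaluate(model, shift_loader)))
    return results  # list of (alpha, target accuracy, accuracy under shift)
```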

Finetuning Pretrained Transformers into RNNs

1 code implementation EMNLP 2021 Jungo Kasai, Hao Peng, Yizhe Zhang, Dani Yogatama, Gabriel Ilharco, Nikolaos Pappas, Yi Mao, Weizhu Chen, Noah A. Smith

Specifically, we propose a swap-then-finetune procedure: in an off-the-shelf pretrained transformer, we replace the softmax attention with its linear-complexity recurrent alternative and then finetune.

Language Modelling Machine Translation +1
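
The swap-then-finetune idea above replaces softmax attention with a linear-complexity alternative; below is a generic (non-causal) feature-map attention sketch. The elu+1 feature map is a common stand-in and not necessarily the paper's exact parameterization.

```python
# Hedged sketch of feature-map ("linear") attention, cost linear in sequence length.
import torch
import torch.nn.functional as F


def linear_attention(q, k, v, eps=1e-6):
    # q, k: (batch, heads, seq, dim); v: (batch, heads, seq, dim_v). Non-causal for brevity.
    q = F.elu(q) + 1.0  # positive feature map phi(.)
    k = F.elu(k) + 1.0
    kv = torch.einsum("bhsd,bhse->bhde", k, v)                      # sum_s phi(k_s) v_s^T
    z = 1.0 / (torch.einsum("bhsd,bhd->bhs", q, k.sum(dim=2)) + eps)  # normalizer per query
    return torch.einsum("bhsd,bhde,bhs->bhse", q, kv, z)
```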

Evaluating NLP Models via Contrast Sets

no code implementations 1 Oct 2020 Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, A. Zhang, Ben Zhou

Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities.

Reading Comprehension Sentiment Analysis

Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping

4 code implementations 15 Feb 2020 Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, Noah Smith

We publicly release all of our experimental data, including training and validation scores for 2,100 trials, to encourage further analysis of training dynamics during fine-tuning.
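
A minimal sketch of the kind of grid such a study runs: vary the seed controlling weight initialization and the seed controlling data order independently, and log a validation score for each trial. `finetune` is a hypothetical function, not the released experiment code.

```python
# Hedged sketch: enumerate (weight-init seed, data-order seed) pairs and record
# validation scores, so the variance across both factors can be analyzed.
import itertools


def run_grid(finetune, init_seeds=range(5), data_seeds=range(5)):
    results = {}
    for init_seed, data_seed in itertools.product(init_seeds, data_seeds):
        # finetune() would seed the classifier-head initialization with init_seed,
        # shuffle the training data with data_seed, and return a dev-set score.
        results[(init_seed, data_seed)] = finetune(init_seed=init_seed, data_seed=data_seed)
    return results
```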

General Evaluation for Instruction Conditioned Navigation using Dynamic Time Warping

1 code implementation 11 Jul 2019 Gabriel Ilharco, Vihan Jain, Alexander Ku, Eugene Ie, Jason Baldridge

We address fundamental flaws in previously used metrics and show how Dynamic Time Warping (DTW), a long known method of measuring similarity between two time series, can be used for evaluation of navigation agents.

Dynamic Time Warping Navigate +2
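
For reference, a plain Dynamic Time Warping sketch between an agent path and a reference path; the paper's normalized variant adds normalization and a success-constrained version that this simplified sketch omits.

```python
# Hedged sketch of standard DTW between two point sequences.
import numpy as np


def dtw(path, reference, dist=lambda a, b: np.linalg.norm(np.asarray(a) - np.asarray(b))):
    n, m = len(path), len(reference)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(path[i - 1], reference[j - 1])
            # Classic DTW recurrence: extend the cheapest of the three neighbors.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]


# Example: identical paths have zero DTW cost.
# print(dtw([(0, 0), (1, 1)], [(0, 0), (1, 1)]))  # -> 0.0
```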
