Search Results for author: Aditi Raghunathan

Found 47 papers, 30 papers with code

Overparameterization hurts worst-group accuracy with spurious correlations

no code implementations · ICML 2020 · Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, Percy Liang

Increasing model capacity well beyond the point of zero training error has been observed to improve average test accuracy.

Scaling Laws for Data Filtering -- Data Curation cannot be Compute Agnostic

2 code implementations · 10 Apr 2024 · Sachin Goyal, Pratyush Maini, Zachary C. Lipton, Aditi Raghunathan, J. Zico Kolter

Vision-language models (VLMs) are trained for thousands of GPU hours on carefully curated web datasets.

Predicting the Performance of Foundation Models via Agreement-on-the-Line

no code implementations · 2 Apr 2024 · Aman Mehra, Rahul Saxena, Taeyoun Kim, Christina Baek, Zico Kolter, Aditi Raghunathan

Recently, it was shown that ensembles of neural networks exhibit the phenomenon "agreement-on-the-line", which can be leveraged to reliably predict OOD performance without labels.

Jailbreaking is Best Solved by Definition

no code implementations · 20 Mar 2024 · Taeyoun Kim, Suhas Kotha, Aditi Raghunathan

The rise of "jailbreak" attacks on language models has led to a flurry of defenses aimed at preventing the output of undesirable responses.

Repetition Improves Language Model Embeddings

1 code implementation · 23 Feb 2024 · Jacob Mitchell Springer, Suhas Kotha, Daniel Fried, Graham Neubig, Aditi Raghunathan

In this work, we address an architectural limitation of autoregressive models: token embeddings cannot contain information from tokens that appear later in the input.

Language Modelling
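
The title's repetition idea can be made concrete with a small sketch: feed the input twice through a causal LM and pool the hidden states of the second copy, whose tokens can attend to the entire first copy. This is a minimal sketch assuming a model that exposes per-token hidden states; `echo_embed` is our illustrative name, not the paper's API.

```python
import torch

def echo_embed(hidden_states: torch.Tensor, seq_len: int) -> torch.Tensor:
    """hidden_states: (batch, 2 * seq_len, dim) from the duplicated input.

    With causal attention, token i's hidden state ignores tokens j > i, so we
    pool only over the second occurrence, which has seen the full input.
    """
    second_copy = hidden_states[:, seq_len:, :]  # tokens that saw the whole input
    return second_copy.mean(dim=1)               # mean-pool into one embedding
```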

AutoFT: Learning an Objective for Robust Fine-Tuning

no code implementations · 18 Jan 2024 · Caroline Choi, Yoonho Lee, Annie Chen, Allan Zhou, Aditi Raghunathan, Chelsea Finn

Given a task, AutoFT searches for a fine-tuning procedure that enhances out-of-distribution (OOD) generalization.

Complementary Benefits of Contrastive Learning and Self-Training Under Distribution Shift

no code implementations · NeurIPS 2023 · Saurabh Garg, Amrith Setlur, Zachary Chase Lipton, Sivaraman Balakrishnan, Virginia Smith, Aditi Raghunathan

Self-training and contrastive learning have emerged as leading techniques for incorporating unlabeled data, both under distribution shift (unsupervised domain adaptation) and when it is absent (semi-supervised learning).

Contrastive Learning · Unsupervised Domain Adaptation
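
As a concrete reference point for the self-training half described above, here is a hedged sketch of thresholded pseudo-labeling; the 0.9 confidence threshold and the `model` interface are illustrative choices, not the paper's.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, unlabeled_x, threshold=0.9):
    # Predict on unlabeled data and keep only confident predictions.
    with torch.no_grad():
        probs = model(unlabeled_x).softmax(dim=-1)
        conf, pseudo_y = probs.max(dim=-1)
        keep = conf >= threshold
    if keep.sum() == 0:
        return torch.tensor(0.0)
    # Train against the model's own confident predictions as labels.
    return F.cross_entropy(model(unlabeled_x[keep]), pseudo_y[keep])
```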

Multitask Learning Can Improve Worst-Group Outcomes

1 code implementation · 5 Dec 2023 · Atharva Kulkarni, Lucio Dery, Amrith Setlur, Aditi Raghunathan, Ameet Talwalkar, Graham Neubig

We primarily consider the standard setting of fine-tuning a pre-trained model, where, following recent work (Gururangan et al., 2020; Dery et al., 2023), we multitask the end task with the pre-training objective constructed from the end task data itself.

Fairness
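
A hedged sketch of that recipe: mix the end-task loss with a masked-language-modeling loss computed on masked copies of the same end-task text. `model.classify`, `model.mlm_head`, and the mixing weight `lam` are illustrative names, not the paper's API.

```python
import torch.nn.functional as F

def multitask_step(model, batch, lam=1.0):
    # End-task objective on the labeled data.
    task_logits = model.classify(batch["input_ids"])
    task_loss = F.cross_entropy(task_logits, batch["labels"])
    # Pre-training objective (MLM) on masked copies of the same inputs.
    mlm_logits = model.mlm_head(batch["masked_input_ids"])
    mlm_loss = F.cross_entropy(mlm_logits.transpose(1, 2), batch["mlm_labels"],
                               ignore_index=-100)
    return task_loss + lam * mlm_loss
```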

Reliable Test-Time Adaptation via Agreement-on-the-Line

no code implementations · 7 Oct 2023 · Eungyeup Kim, MingJie Sun, Aditi Raghunathan, Zico Kolter

In this work, we make a notable and surprising observation that TTAed models strongly show the agreement-on-the-line phenomenon (Baek et al., 2022) across a wide range of distribution shifts.

Test-time Adaptation

Understanding Catastrophic Forgetting in Language Models via Implicit Inference

1 code implementation · 18 Sep 2023 · Suhas Kotha, Jacob Mitchell Springer, Aditi Raghunathan

We lack a systematic understanding of the effects of fine-tuning (via methods such as instruction-tuning or reinforcement learning from human feedback), particularly on tasks outside the narrow fine-tuning distribution.

In-Context Learning

Contextual Reliability: When Different Features Matter in Different Contexts

no code implementations · 19 Jul 2023 · Gaurav Ghosal, Amrith Setlur, Daniel S. Brown, Anca D. Dragan, Aditi Raghunathan

We formalize a new setting called contextual reliability which accounts for the fact that the "right" features to use may vary depending on the context.

T-MARS: Improving Visual Representations by Circumventing Text Feature Learning

1 code implementation · 6 Jul 2023 · Pratyush Maini, Sachin Goyal, Zachary C. Lipton, J. Zico Kolter, Aditi Raghunathan

However, naively removing all such data could also be wasteful, as it throws away images that contain visual features (in addition to overlapping text).

Optical Character Recognition

ALP: Action-Aware Embodied Learning for Perception

no code implementations · 16 Jun 2023 · Xinran Liang, Anthony Han, Wilson Yan, Aditi Raghunathan, Pieter Abbeel

In addition, we show that by training on actively collected data more relevant to the environment and task, our method generalizes more robustly to downstream tasks compared to models pre-trained on fixed datasets such as ImageNet.

Benchmarking · object-detection +3

Automatically Auditing Large Language Models via Discrete Optimization

1 code implementation · 8 Mar 2023 · Erik Jones, Anca Dragan, Aditi Raghunathan, Jacob Steinhardt

Auditing large language models for unexpected behaviors is critical to preempt catastrophic deployments, yet remains challenging.

Bitrate-Constrained DRO: Beyond Worst Case Robustness To Unknown Group Shifts

1 code implementation · 6 Feb 2023 · Amrith Setlur, Don Dennis, Benjamin Eysenbach, Aditi Raghunathan, Chelsea Finn, Virginia Smith, Sergey Levine

Some robust training algorithms (e.g., Group DRO) specialize to group shifts and require group information on all training points.
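
For context, the Group DRO objective the snippet alludes to minimizes the worst group's loss, which is why a group label is needed for every training point; a minimal sketch:

```python
import torch

def worst_group_loss(per_example_loss: torch.Tensor, groups: torch.Tensor) -> torch.Tensor:
    # Average the loss within each group, then take the worst group.
    group_losses = [per_example_loss[groups == g].mean() for g in groups.unique()]
    return torch.stack(group_losses).max()
```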

Learning Representations that Enable Generalization in Assistive Tasks

no code implementations · 5 Dec 2022 · Jerry Zhi-Yang He, Aditi Raghunathan, Daniel S. Brown, Zackory Erickson, Anca D. Dragan

We advocate that generalization to such OOD policies benefits from (1) learning a good latent representation for human policies that test-time humans can accurately be mapped to, and (2) making that representation adaptable with test-time interaction data, instead of relying on it to perfectly capture the space of human policies based on the simulated population only.

Finetune like you pretrain: Improved finetuning of zero-shot vision models

1 code implementation · CVPR 2023 · Sachin Goyal, Ananya Kumar, Sankalp Garg, Zico Kolter, Aditi Raghunathan

In total, these benchmarks establish contrastive finetuning as a simple, intuitive, and state-of-the-art approach for supervised finetuning of image-text models like CLIP.

Descriptive · Few-Shot Learning +1
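
A minimal sketch of contrastive finetuning in the CLIP style, i.e., keeping the pretraining-time symmetric InfoNCE loss at finetuning time, with each image paired to a prompt built from its class label. The temperature of 0.07 is an illustrative default, not the paper's setting.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy: match images to texts and texts to images.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```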

Using Language to Extend to Unseen Domains

1 code implementation · 18 Oct 2022 · Lisa Dunlap, Clara Mohri, Devin Guillory, Han Zhang, Trevor Darrell, Joseph E. Gonzalez, Aditi Raghunathan, Anja Rohrbach

It is expensive to collect training data for every possible domain that a vision model may encounter when deployed.

Domain Adaptation

Test-Time Adaptation via Conjugate Pseudo-labels

1 code implementation · 20 Jul 2022 · Sachin Goyal, MingJie Sun, Aditi Raghunathan, Zico Kolter

In this paper, we start by presenting a surprising phenomenon: if we attempt to meta-learn the best possible TTA loss over a wide class of functions, then we recover a function that is remarkably similar to (a temperature-scaled version of) the softmax-entropy employed by TENT.

Meta-Learning · Test-time Adaptation
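
For reference, a sketch of the (temperature-scaled) softmax-entropy objective mentioned above; TENT minimizes this on unlabeled test batches, typically updating only normalization parameters.

```python
import torch

def softmax_entropy(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # Entropy of the model's own predictive distribution; no labels needed.
    probs = (logits / temperature).softmax(dim=-1)
    log_probs = (logits / temperature).log_softmax(dim=-1)
    return -(probs * log_probs).sum(dim=-1).mean()
```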

Calibrated ensembles can mitigate accuracy tradeoffs under distribution shift

no code implementations · 18 Jul 2022 · Ananya Kumar, Tengyu Ma, Percy Liang, Aditi Raghunathan

We often see undesirable tradeoffs in robust machine learning where out-of-distribution (OOD) accuracy is at odds with in-distribution (ID) accuracy: a robust classifier obtained via specialized techniques such as removing spurious features often has better OOD but worse ID accuracy compared to a standard classifier trained via ERM.
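
A hedged sketch of the calibrate-then-ensemble recipe named in the title, assuming each model's temperature has already been fit on held-out ID data (standard temperature scaling):

```python
import torch

def calibrated_ensemble(logits_robust, logits_standard, t_robust, t_standard):
    # Temperature-scale each model's logits, then average the probabilities.
    p_robust = (logits_robust / t_robust).softmax(dim=-1)
    p_standard = (logits_standard / t_standard).softmax(dim=-1)
    return 0.5 * (p_robust + p_standard)
```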

Agreement-on-the-Line: Predicting the Performance of Neural Networks under Distribution Shift

1 code implementation · 27 Jun 2022 · Christina Baek, Yiding Jiang, Aditi Raghunathan, Zico Kolter

In this paper, we show a similar but surprising phenomenon also holds for the agreement between pairs of neural network classifiers: whenever accuracy-on-the-line holds, we observe that the OOD agreement between the predictions of any two pairs of neural networks (with potentially different architectures) also exhibits a strong linear correlation with their ID agreement.

Model Selection
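
The label-free quantity at the heart of the phenomenon is just the agreement rate between two classifiers; a minimal sketch of how it would be computed on ID and OOD splits before fitting the linear trend:

```python
import numpy as np

def agreement_rate(preds_a: np.ndarray, preds_b: np.ndarray) -> float:
    # Fraction of inputs on which two classifiers make the same prediction.
    return float((preds_a == preds_b).mean())
```

Because agreement compares only two sets of predictions, the resulting OOD estimate requires no OOD labels.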

Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution

3 code implementations · 21 Feb 2022 · Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, Percy Liang

However, in this paper, we find that fine-tuning can achieve worse accuracy than linear probing out-of-distribution (OOD) when the pretrained features are good and the distribution shift is large.
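
The remedy associated with this paper is the two-stage LP-FT procedure: train a linear probe on frozen features first, then fine-tune everything from that initialization. A hedged sketch, where `backbone`, `head`, and `train` are illustrative stand-ins:

```python
def lp_then_ft(backbone, head, train):
    # Stage 1: linear probing on frozen pretrained features.
    for p in backbone.parameters():
        p.requires_grad = False
    train(list(head.parameters()))
    # Stage 2: full fine-tuning, starting from the probed head.
    for p in backbone.parameters():
        p.requires_grad = True
    train(list(backbone.parameters()) + list(head.parameters()))
```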

Calibrated ensembles - a simple way to mitigate ID-OOD accuracy tradeoffs

no code implementations · 29 Sep 2021 · Ananya Kumar, Aditi Raghunathan, Tengyu Ma, Percy Liang

We often see undesirable tradeoffs in robust machine learning where out-of-distribution (OOD) accuracy is at odds with in-distribution (ID) accuracy.

On the Opportunities and Risks of Foundation Models

2 code implementations · 16 Aug 2021 · Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.

Transfer Learning

Just Train Twice: Improving Group Robustness without Training Group Information

1 code implementation · 19 Jul 2021 · Evan Zheran Liu, Behzad Haghgoo, Annie S. Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, Chelsea Finn

Standard training via empirical risk minimization (ERM) can produce models that achieve high accuracy on average but low accuracy on certain groups, especially in the presence of spurious correlations between the input and label.

Image Classification · Out-of-Distribution Generalization
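
A hedged sketch of the two-phase procedure the title names: train once with ERM, then retrain with the first model's mistakes upweighted. `fit`, `predict`, and the upweight factor are illustrative stand-ins.

```python
def just_train_twice(fit, predict, X, y, upweight=20.0):
    model_1 = fit(X, y, weights=None)              # stage 1: plain ERM
    errors = predict(model_1, X) != y              # identification set
    weights = 1.0 + (upweight - 1.0) * errors      # upweight misclassified points
    return fit(X, y, weights=weights)              # stage 2: weighted ERM
```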

Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming

2 code implementations · NeurIPS 2020 · Sumanth Dathathri, Krishnamurthy Dvijotham, Alexey Kurakin, Aditi Raghunathan, Jonathan Uesato, Rudy Bunel, Shreya Shankar, Jacob Steinhardt, Ian Goodfellow, Percy Liang, Pushmeet Kohli

In this work, we propose a first-order dual SDP algorithm that (1) requires memory only linear in the total number of network activations, and (2) requires only a fixed number of forward/backward passes through the network per iteration.

Decoupling Exploration and Exploitation for Meta-Reinforcement Learning without Sacrifices

2 code implementations · 6 Aug 2020 · Evan Zheran Liu, Aditi Raghunathan, Percy Liang, Chelsea Finn

Learning a new task often requires both exploring to gather task-relevant information and exploiting this information to solve the task.

Meta Reinforcement Learning · reinforcement-learning +2

The Pitfalls of Simplicity Bias in Neural Networks

2 code implementations · NeurIPS 2020 · Harshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, Praneeth Netrapalli

Furthermore, previous settings that use SB to theoretically justify why neural networks generalize well do not simultaneously capture the non-robustness of neural networks---a widely observed phenomenon in practice [Goodfellow et al. 2014, Jo and Bengio 2017].

Explore then Execute: Adapting without Rewards via Factorized Meta-Reinforcement Learning

no code implementations · ICML Workshop LifelongML 2020 · Evan Zheran Liu, Aditi Raghunathan, Percy Liang, Chelsea Finn

In principle, meta-reinforcement learning approaches can exploit this shared structure, but in practice, they fail to adapt to new environments when adaptation requires targeted exploration (e.g., exploring the cabinets to find ingredients in a new kitchen).

Meta Reinforcement Learning · reinforcement-learning +2

An Investigation of Why Overparameterization Exacerbates Spurious Correlations

3 code implementations · 9 May 2020 · Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, Percy Liang

We study why overparameterization -- increasing model size well beyond the point of zero training error -- can hurt test error on minority groups despite improving average test error when there are spurious correlations in the data.

Inductive Bias
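
The metric at stake is worst-group accuracy: accuracy computed per group (e.g., per label and spurious-attribute combination) and reported at its minimum. A minimal sketch:

```python
import numpy as np

def worst_group_accuracy(preds: np.ndarray, labels: np.ndarray, groups: np.ndarray):
    # Per-group accuracy, plus the minimum over groups.
    accs = {g: float((preds[groups == g] == labels[groups == g]).mean())
            for g in np.unique(groups)}
    return min(accs.values()), accs
```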

DROCC: Deep Robust One-Class Classification

1 code implementation · ICML 2020 · Sachin Goyal, Aditi Raghunathan, Moksh Jain, Harsha Vardhan Simhadri, Prateek Jain

Classical approaches for one-class problems such as one-class SVM and isolation forest require careful feature engineering when applied to structured domains like images.

Classification · Feature Engineering +3

Understanding and Mitigating the Tradeoff Between Robustness and Accuracy

1 code implementation · ICML 2020 · Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John Duchi, Percy Liang

In this work, we precisely characterize the effect of augmentation on the standard error in linear regression when the optimal linear predictor has zero standard and robust error.

regression

Adversarial Training Can Hurt Generalization

no code implementations · ICML Workshop Deep_Phenomen 2019 · Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C. Duchi, Percy Liang

While adversarial training can improve robust accuracy (against an adversary), it sometimes hurts standard accuracy (when there is no adversary).
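
For reference, a hedged sketch of a standard PGD-style inner maximization used in adversarial training; the l-infinity radius, step size, and step count below are illustrative defaults, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend the loss
            delta.clamp_(-eps, eps)              # stay inside the l-inf ball
        delta.grad.zero_()
    return (x + delta).detach()
```

Adversarial training then minimizes the loss on these perturbed inputs instead of (or alongside) the clean ones.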

Maximum Weighted Loss Discrepancy

1 code implementation · 8 Jun 2019 · Fereshte Khani, Aditi Raghunathan, Percy Liang

To capture this inequality, we introduce and study a notion we call maximum weighted loss discrepancy (MWLD), the maximum (weighted) difference between the loss of a group and the loss of the population.

Fairness · Generalization Bounds
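
One plausible formalization of the sentence above, in our own notation (see the paper for the exact weighting function):

```latex
% For model parameters \theta, per-group loss L_g(\theta), population loss
% L(\theta), and a group weight w(g) (notation ours):
\mathrm{MWLD}(\theta) \;=\; \max_{g}\; w(g)\,\bigl| L_g(\theta) - L(\theta) \bigr|
```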

Unlabeled Data Improves Adversarial Robustness

4 code implementations · NeurIPS 2019 · Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, Percy Liang, John C. Duchi

We demonstrate, theoretically and empirically, that adversarial robustness can significantly benefit from semisupervised learning.

Adversarial Robustness · Robust classification

Semidefinite relaxations for certifying robustness to adversarial examples

3 code implementations · NeurIPS 2018 · Aditi Raghunathan, Jacob Steinhardt, Percy Liang

One promise of ending the arms race is developing certified defenses, ones which are provably robust against all attackers in some family.
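
Schematically, certification amounts to proving a bound on a worst-case margin; in our notation (a sketch, not the paper's exact formulation):

```latex
% A classifier f with true label y is certified at input x against
% perturbations of size \epsilon if, for every other label y',
\max_{\|\delta\|_\infty \le \epsilon} \; \bigl( f_{y'}(x + \delta) - f_{y}(x + \delta) \bigr) \;<\; 0 .
% The SDP approach upper-bounds this non-convex maximum by a tractable
% semidefinite relaxation; a negative upper bound proves robustness.
```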

Certified Defenses against Adversarial Examples

4 code implementations · ICLR 2018 · Aditi Raghunathan, Jacob Steinhardt, Percy Liang

While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs.

Adversarial Attack · Adversarial Defense +1

Estimating the unseen from multiple populations

2 code implementations · ICML 2017 · Aditi Raghunathan, Greg Valiant, James Zou

We generalize this extrapolation and related unseen estimation problems to the multiple population setting, where population $j$ has an unknown distribution $D_j$ from which we observe $n_j$ samples.

Learning Mixture of Gaussians with Streaming Data

no code implementations · NeurIPS 2017 · Aditi Raghunathan, Ravishankar Krishnaswamy, Prateek Jain

However, by using a streaming version of the classical (soft-thresholding-based) EM method that exploits the Gaussian distribution explicitly, we show that for a mixture of two Gaussians the true means can be estimated consistently, with estimation error decreasing at nearly optimal rate, and tending to $0$ for $N\rightarrow \infty$.

Clustering
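
A simplified, hedged sketch of a streaming soft-EM update for two unit-variance components; the paper's soft-thresholding and step-size schedule are omitted here.

```python
import numpy as np

def streaming_em_step(x: np.ndarray, mu: np.ndarray, step: float) -> np.ndarray:
    """One streaming update of the two means `mu` (shape (2, d)) from sample x."""
    d0 = np.sum((x - mu[0]) ** 2)
    d1 = np.sum((x - mu[1]) ** 2)
    r1 = 1.0 / (1.0 + np.exp((d1 - d0) / 2.0))   # posterior P(component 1 | x)
    mu[0] += step * (1.0 - r1) * (x - mu[0])      # responsibility-weighted updates
    mu[1] += step * r1 * (x - mu[1])
    return mu
```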

Estimation from Indirect Supervision with Linear Moments

1 code implementation · 10 Aug 2016 · Aditi Raghunathan, Roy Frostig, John Duchi, Percy Liang

In structured prediction problems where we have indirect supervision of the output, maximum marginal likelihood faces two computational obstacles: non-convexity of the objective and intractability of even a single gradient computation.

Structured Prediction

Probabilistic Dependency Networks for Prediction and Diagnostics

no code implementations · 13 Aug 2015 · Narayanan U. Edakunni, Aditi Raghunathan, Abhishek Tripathi, John Handley, Fredric Roulland

Research in transportation frequently involves modelling and predicting attributes of events that occur at regular intervals.
