Search Results for author: Alexis Ross

Found 14 papers, 7 papers with code

Language Modeling with Editable External Knowledge

1 code implementation • 17 Jun 2024 • Belinda Z. Li, Emmy Liu, Alexis Ross, Abbas Zeitoun, Graham Neubig, Jacob Andreas

This paper introduces ERASE, which improves model behavior when new documents are acquired by incrementally deleting or rewriting other entries in the knowledge base each time a document is added.

Tasks: Language Modeling, Language Modelling (+1 more)
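
To make the update rule concrete, here is a minimal sketch of an ERASE-style ingestion loop. The deletion and rewriting decisions are passed in as callables because in the real system they would be language-model calls; none of these names come from the paper's released code.

```python
from typing import Callable

def add_document(
    kb: list[str],
    new_doc: str,
    related: Callable[[str, str], bool],
    contradicts: Callable[[str, str], bool],
    rewrite: Callable[[str, str], str],
) -> list[str]:
    """Update the knowledge base when a new document arrives."""
    updated: list[str] = []
    for entry in kb:
        if not related(entry, new_doc):
            updated.append(entry)                    # unaffected, keep as-is
        elif contradicts(entry, new_doc):
            continue                                 # stale fact, delete
        else:
            updated.append(rewrite(entry, new_doc))  # make entry consistent
    updated.append(new_doc)                          # finally add the document
    return updated
```

Keeping the predicates as parameters makes the sketch self-contained while making clear that the delete/rewrite decisions are where the actual modeling lives.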

Learning Phonotactics from Linguistic Informants

no code implementations • 8 May 2024 • Canaan Breiss, Alexis Ross, Amani Maina-Kilaas, Roger Levy, Jacob Andreas

We propose an interactive approach to language learning that utilizes linguistic acceptability judgments from an informant (a competent language user) to learn a grammar.

Tasks: Linguistic Acceptability
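
As a rough illustration of the query-the-informant loop: maintain a set of candidate grammars, ask the informant about the form whose judgment best splits them, and discard inconsistent grammars. The grammar space, scoring, and forms below are invented toys, far simpler than the paper's setup.

```python
import itertools

SEGMENTS = "ptksa"
BIGRAMS = [a + b for a in SEGMENTS for b in SEGMENTS]

# Candidate grammars: each forbids a pair of bigrams.
grammars = [set(pair) for pair in itertools.combinations(BIGRAMS, 2)]

def accepts(grammar: set[str], form: str) -> bool:
    """A form is acceptable iff it contains no forbidden bigram."""
    return not any(form[i:i + 2] in grammar for i in range(len(form) - 1))

def most_informative(forms: list[str], grammars: list[set[str]]) -> str:
    """Pick the form whose judgment most evenly splits surviving grammars."""
    def imbalance(form: str) -> int:
        yes = sum(accepts(g, form) for g in grammars)
        return abs(2 * yes - len(grammars))
    return min(forms, key=imbalance)

true_grammar = {"pk", "ts"}          # stands in for the human informant
forms = ["".join(p) for p in itertools.product(SEGMENTS, repeat=3)]
for _ in range(8):
    query = most_informative(forms, grammars)
    judgment = accepts(true_grammar, query)            # informant's verdict
    grammars = [g for g in grammars if accepts(g, query) == judgment]
print(len(grammars), "grammars remain consistent")
```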

Toward In-Context Teaching: Adapting Examples to Students' Misconceptions

no code implementations • 7 May 2024 • Alexis Ross, Jacob Andreas

AdapT has two components: (1) a collection of simulated Bayesian student models that can be used to evaluate automated teaching methods; and (2) a platform for evaluation with human students, to characterize the real-world effectiveness of these methods.

Tasks: Misconceptions
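
To give a flavor of what a simulated Bayesian student looks like, here is a minimal sketch with a finite hypothesis space. The paper's student models are richer, so treat this as an assumption-laden toy rather than the benchmark's implementation.

```python
# A simulated Bayesian student: after each teaching example, zero out
# inconsistent hypotheses and renormalize the belief distribution.

def update_posterior(prior: dict, example, consistent) -> dict:
    """One Bayes update of the student's belief over hypotheses."""
    post = {h: p * consistent(h, example) for h, p in prior.items()}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()} if z else prior

# Toy usage: hypotheses are threshold rules "x is positive iff x >= t".
hyps = {t: 1 / 5 for t in range(5)}
consistent = lambda t, ex: float((ex[0] >= t) == ex[1])   # ex = (x, label)
hyps = update_posterior(hyps, (3, True), consistent)      # teach: 3 is positive
print(hyps)  # mass shifts to thresholds t <= 3
```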

ARIES: A Corpus of Scientific Paper Edits Made in Response to Peer Reviews

1 code implementation • 21 Jun 2023 • Mike D'Arcy, Alexis Ross, Erin Bransom, Bailey Kuehl, Jonathan Bragg, Tom Hope, Doug Downey

We introduce the task of automatically revising scientific papers based on peer feedback and release ARIES, a dataset of review comments and their corresponding paper edits.
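
The dataset pairs each review comment with the edit it prompted; a hypothetical record schema for such pairs might look like the following (field names are illustrative, not ARIES's actual keys).

```python
from dataclasses import dataclass

@dataclass
class ReviewEditPair:
    paper_id: str
    review_comment: str   # the reviewer's request, e.g. "report variance"
    original_text: str    # paper passage before revision
    revised_text: str     # passage after the authors' edit
```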

CREST: A Joint Framework for Rationalization and Counterfactual Text Generation

1 code implementation • 26 May 2023 • Marcos Treviso, Alexis Ross, Nuno M. Guerreiro, André F. T. Martins

Selective rationales and counterfactual examples have emerged as two effective, complementary classes of interpretability methods for analyzing and training NLP models.

Tasks: Counterfactual, Data Augmentation (+2 more)

Does Self-Rationalization Improve Robustness to Spurious Correlations?

no code implementations • 24 Oct 2022 • Alexis Ross, Matthew E. Peters, Ana Marasović

We evaluate how training self-rationalization models with free-text rationales affects robustness to spurious correlations in fine-tuned encoder-decoder and decoder-only models of six different sizes.

Tasks: Decoder
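
For readers unfamiliar with self-rationalization: the model is trained to emit the label and a free-text rationale in a single output sequence. A minimal sketch of such an input/output format follows; the paper's actual templates may differ.

```python
def format_example(premise: str, hypothesis: str, label: str, rationale: str):
    """Build one self-rationalization training pair (source, target)."""
    source = f"premise: {premise} hypothesis: {hypothesis}"
    target = f"{label} because {rationale}"   # label + free-text rationale
    return source, target

src, tgt = format_example(
    "A man is playing a guitar on stage.",
    "A person is performing music.",
    "entailment",
    "playing a guitar on stage is a musical performance",
)
print(src, "->", tgt)
```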

Tailor: Generating and Perturbing Text with Semantic Controls

1 code implementation • ACL 2022 • Alexis Ross, Tongshuang Wu, Hao Peng, Matthew E. Peters, Matt Gardner

We craft a set of operations to modify the control codes, which in turn steer generation towards targeted attributes.

Tasks: Data Augmentation, Diversity (+2 more)
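
As a toy illustration of perturbing semantic control codes to steer generation: edit an attribute in a structured control header and regenerate. The header format and operations below are invented for illustration; the Tailor repository defines the real control-code syntax.

```python
def perturb_controls(header: dict, ops: dict) -> dict:
    """Apply edit operations (e.g. change voice or tense) to a control header."""
    return {**header, **ops}

header = {"verb": "comfort", "voice": "active", "tense": "past",
          "AGENT": "the nurse", "PATIENT": "the child"}
# Swap to passive voice: generation should now foreground the PATIENT.
perturbed = perturb_controls(header, {"voice": "passive"})
prompt = " | ".join(f"{k}: {v}" for k, v in perturbed.items())
print(prompt)  # fed to the generator as a structured control prefix
```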

Competency Problems: On Finding and Removing Artifacts in Language Data

no code implementations • EMNLP 2021 • Matt Gardner, William Merrill, Jesse Dodge, Matthew E. Peters, Alexis Ross, Sameer Singh, Noah A. Smith

In this work we argue that for complex language understanding tasks, all simple feature correlations are spurious, and we formalize this notion into a class of problems which we call competency problems.

Tasks: Negation
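
Paraphrasing the formal condition from memory (treat this as a gloss, not the paper's exact statement): a task is a competency problem when no simple feature, taken alone, carries information about the label.

```latex
% Gloss of the competency-problem condition: for every simple feature x_i,
% the label distribution conditioned on x_i matches the marginal.
\forall\, i:\quad p(y \mid x_i) = p(y)
```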

Explaining NLP Models via Minimal Contrastive Editing (MiCE)

1 code implementation • Findings (ACL) 2021 • Alexis Ross, Ana Marasović, Matthew E. Peters

Humans have been shown to give contrastive explanations, which explain why an observed event happened rather than some other counterfactual event (the contrast case).

Tasks: Counterfactual, Multiple-choice (+4 more)
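
A minimal sketch of a MiCE-style search for a small contrastive edit, assuming hypothetical helpers `mask_spans(text, frac)` (hide a fraction of tokens) and `infill(masked, target)` (a trained editor that fills the masks toward the contrast label). This is not the released implementation.

```python
def minimal_contrastive_edit(text, predict, target, mask_spans, infill):
    """Return the smallest edit found that flips the predictor to `target`."""
    for frac in (0.1, 0.2, 0.3, 0.5):           # progressively larger masks
        for _ in range(4):                       # a few infill samples per size
            candidate = infill(mask_spans(text, frac), target)
            if predict(candidate) == target:     # reached the contrast case
                return candidate
    return None                                  # no flipping edit found
```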

Learning Models for Actionable Recourse

1 code implementation • NeurIPS 2021 • Alexis Ross, Himabindu Lakkaraju, Osbert Bastani

As machine learning models are increasingly deployed in high-stakes domains such as legal and financial decision-making, there has been growing interest in post-hoc methods for generating counterfactual explanations.

Tasks: Counterfactual, Decision Making
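
To make "recourse" concrete: for a linear classifier, the smallest change to actionable features that crosses the decision boundary has a closed form. The toy below is illustrative only; the paper's contribution is training models so that such recourse is guaranteed to exist.

```python
import numpy as np

def linear_recourse(x, w, b, actionable):
    """Minimal-norm change over actionable coordinates s.t. w.(x+dx)+b > 0."""
    wa = w * actionable                       # zero out immutable features
    score = w @ x + b
    if score > 0 or not wa.any():
        return np.zeros_like(x)               # already positive, or no recourse
    return wa * (-score + 1e-3) / (wa @ wa)   # closed-form projection step

x = np.array([1.0, -2.0, 0.5])                # e.g. income, debt, age
w, b = np.array([0.8, -0.5, 0.3]), -2.0
mask = np.array([1.0, 1.0, 0.0])              # age is not actionable
dx = linear_recourse(x, w, b, mask)
print(x + dx, w @ (x + dx) + b)               # new point sits just past boundary
```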

How well do NLI models capture verb veridicality?

no code implementations • IJCNLP 2019 • Alexis Ross, Ellie Pavlick

In natural language inference (NLI), contexts are considered veridical if they allow us to infer that their underlying propositions make true claims about the real world. For example, "She knows that it rained" licenses the inference that it rained, whereas "She hopes that it rained" does not.

Tasks: Natural Language Inference, Negation (+1 more)

Probing What Different NLP Tasks Teach Machines about Function Word Comprehension

no code implementations • SEMEVAL 2019 • Najoung Kim, Roma Patel, Adam Poliak, Alex Wang, Patrick Xia, R. Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, Ellie Pavlick

Our results show that pretraining on language modeling performs best on average across our probing tasks, supporting its widespread use for pretraining state-of-the-art NLP models, and that CCG supertagging and NLI pretraining perform comparably.

Tasks: CCG Supertagging, Language Modeling (+4 more)
