1 code implementation • EMNLP 2021 • Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, Luke Zettlemoyer
Large language models have shown promising results in zero-shot settings.
3 code implementations • 23 May 2023 • Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, Luke Zettlemoyer
Our best model family, which we name Guanaco, outperforms all previous openly released models on the Vicuna benchmark, reaching 99.3% of the performance level of ChatGPT while only requiring 24 hours of finetuning on a single GPU.
no code implementations • 20 Dec 2022 • Weijia Shi, Xiaochuang Han, Hila Gonen, Ari Holtzman, Yulia Tsvetkov, Luke Zettlemoyer
Large language models can perform new tasks in a zero-shot fashion, given natural language prompts that specify the desired behavior.
2 code implementations • 27 Oct 2022 • Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, Mike Lewis
We propose contrastive decoding (CD), a more reliable search objective that returns the difference between likelihood under a large LM (called the expert, e.g. OPT-13b) and a small LM (called the amateur, e.g. OPT-125m).
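The token-level objective described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the function name, the `alpha` plausibility hyperparameter, and the greedy argmax step are assumptions made for clarity.

```python
import numpy as np

def contrastive_decoding_step(expert_logprobs, amateur_logprobs, alpha=0.1):
    """One greedy step of contrastive decoding (CD), sketched.

    Scores each candidate token by the difference between expert and
    amateur log-probabilities, restricted to a plausibility set: tokens
    whose expert probability is at least `alpha` times the expert's
    maximum probability. Tokens outside that set are excluded, so the
    amateur's noise on implausible tokens cannot dominate the score.
    """
    expert_probs = np.exp(expert_logprobs)
    # Adaptive plausibility constraint: keep only tokens the expert finds likely.
    plausible = expert_probs >= alpha * expert_probs.max()
    scores = np.where(plausible, expert_logprobs - amateur_logprobs, -np.inf)
    return int(np.argmax(scores))
```

Without the plausibility mask, a token the expert considers nearly impossible could still win whenever the amateur assigns it even lower probability; the mask is what makes the difference a usable search objective.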
no code implementations • 26 Aug 2022 • Julian Michael, Ari Holtzman, Alicia Parrish, Aaron Mueller, Alex Wang, Angelica Chen, Divyam Madaan, Nikita Nangia, Richard Yuanzhe Pang, Jason Phang, Samuel R. Bowman
We present the results of the NLP Community Metasurvey.
1 code implementation • 25 Feb 2022 • Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer
Large language models (LMs) are able to in-context learn -- perform a new task via inference alone by conditioning on a few input-label pairs (demonstrations) and making predictions for new inputs.
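Conditioning on demonstrations amounts to formatting input-label pairs into a single prompt that the LM completes. A minimal sketch of that prompt construction follows; the `Input:`/`Label:` template is an illustrative assumption, not the specific format studied in the paper.

```python
def build_icl_prompt(demonstrations, test_input):
    """Format a few-shot in-context learning prompt.

    Each demonstration is an (input, label) pair; the prompt ends with
    the unlabeled test input, so the LM is expected to continue the
    text with the test input's label via inference alone.
    """
    blocks = [f"Input: {x}\nLabel: {y}" for x, y in demonstrations]
    blocks.append(f"Input: {test_input}\nLabel:")
    return "\n\n".join(blocks)
```

No parameters are updated: the demonstrations only shape the context the model conditions on, which is what makes the question of *why* the input-label pairings matter (the subject of this paper) interesting.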
2 code implementations • NAACL 2022 • Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, Luke Zettlemoyer
We introduce a new domain expert mixture (DEMix) layer that enables conditioning a language model (LM) on the domain of the input text.
no code implementations • ACL 2021 • Rowan Zellers, Ari Holtzman, Matthew Peters, Roozbeh Mottaghi, Aniruddha Kembhavi, Ali Farhadi, Yejin Choi
We propose PIGLeT: a model that learns physical commonsense knowledge through interaction, and then uses this knowledge to ground language.
2 code implementations • EMNLP 2021 • Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, Yejin Choi
Image captioning has conventionally relied on reference-based automatic evaluations, where machine captions are compared against captions written by humans.
Ranked #1 on Hallucination Pair-wise Detection (4-ref) on FOIL
no code implementations • 2 Feb 2021 • Yao Dou, Maxwell Forbes, Ari Holtzman, Yejin Choi
We study conversational dialog in which there are many possible responses to a given history.
no code implementations • ACL 2021 • Peter West, Ximing Lu, Ari Holtzman, Chandra Bhagavatula, Jena Hwang, Yejin Choi
In this paper, we present Reflective Decoding, a novel unsupervised algorithm that allows for direct application of unidirectional LMs to non-sequential tasks.
2 code implementations • EMNLP 2020 • Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, Joseph Turian
Language understanding research is held back by a failure to relate language to the physical world it describes and to the social interactions it facilitates.
1 code implementation • NAACL 2021 • Rowan Zellers, Ari Holtzman, Elizabeth Clark, Lianhui Qin, Ali Farhadi, Yejin Choi
We propose TuringAdvice, a new challenge task and dataset for language understanding models.
no code implementations • IJCNLP 2019 • Peter West, Ari Holtzman, Jan Buys, Yejin Choi
In this paper, we propose a novel approach to unsupervised sentence summarization by mapping the Information Bottleneck principle to a conditional language modelling objective: given a sentence, our approach seeks a compressed sentence that can best predict the next sentence.
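The trade-off described above (compress the sentence, but keep what predicts the next sentence) can be caricatured as a scoring rule over candidate compressions. This is a hedged sketch only: the linear form, the `beta` length penalty, and the candidate representation are illustrative assumptions, not the paper's actual optimization procedure.

```python
def select_compression(candidates, beta=1.0):
    """Pick the compression balancing relevance against brevity.

    `candidates` is a list of (summary_text, logp_next) pairs, where
    logp_next is a language model's log-probability of the *next*
    sentence conditioned on the candidate summary (the relevance term
    of the Information Bottleneck trade-off). Word count stands in as
    a crude proxy for the compression term, weighted by `beta`.
    """
    def score(candidate):
        summary, logp_next = candidate
        return logp_next - beta * len(summary.split())
    return max(candidates, key=score)[0]
```

With `beta = 0` the rule degenerates to picking whichever candidate best predicts the next sentence, however long; increasing `beta` trades that predictive power for shorter summaries.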
1 code implementation • IJCNLP 2019 • Lianhui Qin, Antoine Bosselut, Ari Holtzman, Chandra Bhagavatula, Elizabeth Clark, Yejin Choi
Counterfactual reasoning requires predicting how alternative events, contrary to what actually happened, might have resulted in different outcomes.
2 code implementations • ICLR 2020 • Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Scott Wen-tau Yih, Yejin Choi
Abductive reasoning is inference to the most plausible explanation.
1 code implementation • 8 Aug 2019 • Maxwell Forbes, Ari Holtzman, Yejin Choi
Humans understand language based on the rich background knowledge about how the physical world works, which in turn allows us to reason about the physical world through language.
no code implementations • EACL 2021 • Saadia Gabriel, Antoine Bosselut, Jeff Da, Ari Holtzman, Jan Buys, Kyle Lo, Asli Celikyilmaz, Yejin Choi
We introduce a general framework for abstractive summarization with factual consistency and distinct modeling of the narrative flow in an output summary.
4 code implementations • NeurIPS 2019 • Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, Yejin Choi
We find that the best current discriminators can distinguish neural fake news from real, human-written news with 73% accuracy, assuming access to a moderate level of training data.
Ranked #2 on Fake News Detection on Grover-Mega
2 code implementations • ACL 2019 • Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, Yejin Choi
In this paper, we show that commonsense inference still proves difficult for even state-of-the-art models, by presenting HellaSwag, a new challenge dataset.
15 code implementations • ICLR 2020 • Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, Yejin Choi
Despite considerable advancements with deep neural language models, the enigma of neural text degeneration persists when these models are tested as text generators.
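The remedy this paper proposes for degeneration is nucleus (top-p) sampling: sample from the smallest set of tokens whose cumulative probability exceeds p, with the mass renormalized. A minimal numpy sketch, with the function signature and tie-breaking details assumed rather than taken from the authors' code:

```python
import numpy as np

def nucleus_sample(probs, p=0.9, rng=None):
    """Nucleus (top-p) sampling over a next-token distribution.

    Keeps the smallest set of highest-probability tokens whose total
    mass exceeds `p`, renormalizes, and samples from that set. This
    truncates the unreliable low-probability tail that drives
    repetitive or incoherent continuations.
    """
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]          # tokens, most probable first
    cumulative = np.cumsum(probs[order])
    # Smallest prefix whose cumulative mass exceeds p (always >= 1 token).
    cutoff = int(np.searchsorted(cumulative, p)) + 1
    nucleus = order[:cutoff]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()
    return int(rng.choice(nucleus, p=nucleus_probs))
```

Unlike a fixed top-k cutoff, the nucleus grows and shrinks with the model's confidence: a peaked distribution yields a tiny candidate set, a flat one a large set.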
1 code implementation • CVPR 2019 • Liyiming Ke, Xiujun Li, Yonatan Bisk, Ari Holtzman, Zhe Gan, Jingjing Liu, Jianfeng Gao, Yejin Choi, Siddhartha Srinivasa
We present the Frontier Aware Search with backTracking (FAST) Navigator, a general framework for action decoding, that achieves state-of-the-art results on the Room-to-Room (R2R) Vision-and-Language navigation challenge of Anderson et al.
Ranked #3 on Vision-Language Navigation on Room2Room
2 code implementations • ACL 2018 • Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, Yejin Choi
Recurrent Neural Networks (RNNs) are powerful autoregressive sequence models, but when used to generate natural language their output tends to be overly generic, repetitive, and self-contradictory.
no code implementations • NAACL 2018 • Hao Fang, Hao Cheng, Maarten Sap, Elizabeth Clark, Ari Holtzman, Yejin Choi, Noah A. Smith, Mari Ostendorf
We present Sounding Board, a social chatbot that won the 2017 Amazon Alexa Prize.
no code implementations • ICLR 2018 • Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, Yejin Choi
Human evaluation demonstrates that text generated by the resulting generator is preferred over that of baselines by a large margin and significantly enhances the overall coherence, style, and information content of the generated text.
no code implementations • ICLR 2018 • Antoine Bosselut, Omer Levy, Ari Holtzman, Corin Ennis, Dieter Fox, Yejin Choi
Understanding procedural language requires anticipating the causal effects of actions, even when they are not explicitly stated.
no code implementations • EMNLP 2017 • Maarten Sap, Marcella Cindy Prasettio, Ari Holtzman, Hannah Rashkin, Yejin Choi
The framing of an action influences how we perceive its actor.