no code implementations • 4 Apr 2024 • Dongwei Jiang, Jingyu Zhang, Orion Weller, Nathaniel Weir, Benjamin Van Durme, Daniel Khashabi
Can LLMs continually improve their previous outputs for better results?
no code implementations • 29 Feb 2024 • Kate Sanders, Nathaniel Weir, Benjamin Van Durme
It is challenging to perform question-answering over complex, multimodal content such as television clips.
no code implementations • 22 Feb 2024 • Nathaniel Weir, Kate Sanders, Orion Weller, Shreya Sharma, Dongwei Jiang, Zhengping Jiang, Bhavana Dalvi Mishra, Oyvind Tafjord, Peter Jansen, Peter Clark, Benjamin Van Durme
Contemporary language models enable new opportunities for structured reasoning with text, such as the construction and evaluation of intuitive, proof-like textual entailment trees without relying on brittle formal logic.
no code implementations • 12 Jan 2024 • Xinrui Zou, Ming Zhang, Nathaniel Weir, Benjamin Van Durme, Nils Holzenberger
We re-frame statutory reasoning as an analogy task, where each analogy instance combines two instances of statutory reasoning.
no code implementations • 22 May 2023 • Orion Weller, Marc Marone, Nathaniel Weir, Dawn Lawrie, Daniel Khashabi, Benjamin Van Durme
Large Language Models (LLMs) may hallucinate and generate fake information, despite pre-training on factual data.
no code implementations • 20 Dec 2022 • Nathaniel Weir, Ryan Thomas, Randolph D'Amore, Kellie Hill, Benjamin Van Durme, Harsh Jhamtani
We introduce a language generation task grounded in a popular video game environment.
no code implementations • 20 Dec 2022 • Orion Weller, Aleem Khan, Nathaniel Weir, Dawn Lawrie, Benjamin Van Durme
Recent work in open-domain question answering (ODQA) has shown that adversarial poisoning of the search collection can cause large drops in accuracy for production systems.
no code implementations • 16 Sep 2022 • Nathaniel Weir, Peter Clark, Benjamin Van Durme
Our goal is a modern approach to answering questions via systematic reasoning, where answers are supported by human-interpretable proof trees grounded in an NL corpus of authoritative facts.
no code implementations • 9 Mar 2022 • Nathaniel Weir, Xingdi Yuan, Marc-Alexandre Côté, Matthew Hausknecht, Romain Laroche, Ida Momennejad, Harm van Seijen, Benjamin Van Durme
Aided by the expressive compositionality of their language, humans can learn quickly from demonstration.
no code implementations • Joint Conference on Lexical and Computational Semantics 2021 • Jiefu Ou, Nathaniel Weir, Anton Belyy, Felix Yu, Benjamin Van Durme
We propose a structured extension to bidirectional-context conditional language generation, or "infilling," inspired by Frame Semantic theory (Fillmore, 1976).
1 code implementation • EMNLP 2020 • Nathaniel Weir, João Sedoc, Benjamin Van Durme
We present COD3S, a novel method for generating semantically diverse sentences using neural sequence-to-sequence (seq2seq) models.
no code implementations • 10 Apr 2020 • Nathaniel Weir, Adam Poliak, Benjamin Van Durme
Our prompts are based on human responses in a psychological study of conceptual associations.
2 code implementations • 2 Apr 2018 • Prasetya Utama, Nathaniel Weir, Fuat Basik, Carsten Binnig, Ugur Cetintemel, Benjamin Hättasch, Amir Ilkhechi, Shekar Ramaswamy, Arif Usta
The ability to extract insights from new data sets is critical for decision-making.