Search Results for author: Nathaniel Weir

Found 16 papers, 7 papers with code

From Models to Microtheories: Distilling a Model's Topical Knowledge for Grounded Question Answering

1 code implementation · 23 Dec 2024 · Nathaniel Weir, Bhavana Dalvi Mishra, Orion Weller, Oyvind Tafjord, Sam Hornstein, Alexander Sabol, Peter Jansen, Benjamin Van Durme, Peter Clark

We show that, when added to a general corpus (e.g., Wikipedia), microtheories can supply critical topical information not necessarily present in the corpus, improving both a model's ability to ground its answers in verifiable knowledge (i.e., to show how answers are systematically entailed by documents in the corpus, fully grounding up to +8% more answers) and the accuracy of those grounded answers (up to +8% absolute).

Question Answering
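The grounding criterion described in the abstract can be illustrated with a toy sketch (all names here are hypothetical, and substring containment stands in for a real entailment model): an answer counts as fully grounded only if each of its supporting claims is entailed by some document in the corpus, so adding a microtheory to the corpus can only increase the grounded fraction.

```python
# Hypothetical sketch of corpus-grounded answering, assuming a binary
# entails(doc, claim) judgment; a real system would use an NLI model.

def is_grounded(claims, corpus, entails):
    # An answer is fully grounded iff every claim is entailed by some document.
    return all(any(entails(doc, c) for doc in corpus) for c in claims)

def grounded_fraction(answers, corpus, entails):
    # Fraction of answers (each a list of supporting claims) fully grounded.
    return sum(is_grounded(claims, corpus, entails) for claims in answers) / len(answers)

# Toy entailment: substring containment stands in for a real NLI model.
entails = lambda doc, claim: claim in doc
corpus = ["water boils at 100 C at sea level", "plants use sunlight"]
microtheory = ["boiling point drops at altitude"]  # topical facts absent from corpus
answers = [["water boils at 100 C"], ["boiling point drops"]]

before = grounded_fraction(answers, corpus, entails)
after = grounded_fraction(answers, corpus + microtheory, entails)
print(before, after)  # prints 0.5 1.0
```

The monotonicity here (adding documents never un-grounds an answer) mirrors why supplying a microtheory can only help the grounding metric.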

Core: Robust Factual Precision with Informative Sub-Claim Identification

1 code implementation · 4 Jul 2024 · Zhengping Jiang, Jingyu Zhang, Nathaniel Weir, Seth Ebner, Miriam Wanner, Kate Sanders, Daniel Khashabi, Anqi Liu, Benjamin Van Durme

Hallucinations pose a challenge to the application of large language models (LLMs), motivating the development of metrics to evaluate factual precision.

Informativeness · Text Generation

Learning to Reason via Program Generation, Emulation, and Search

1 code implementation · 25 May 2024 · Nathaniel Weir, Muhammad Khalifa, Linlu Qiu, Orion Weller, Peter Clark

CoGEX works by (1) training LMs to generate pseudo-programs, (2) teaching them to emulate their generated programs' execution, letting the LM's own knowledge fill in the gaps left by undefined leaf functions, and (3) searching over many programs to find an optimal one.

Code Generation · In-Context Learning +1
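The three-step loop in the abstract can be sketched as follows. This is only an illustrative outline, not the paper's implementation: the `lm` callable, the prompt wording, and the scoring function are all hypothetical stand-ins.

```python
# Hypothetical sketch of a CoGEX-style generate/emulate/search loop.

def generate_pseudo_programs(lm, task, n=3):
    # Step (1): the LM writes n candidate pseudo-programs as text.
    return [lm(f"Write a pseudo-program for: {task} (variant {i})") for i in range(n)]

def emulate(lm, program, task):
    # Step (2): the LM "runs" the program itself, filling in undefined
    # leaf functions with its own knowledge instead of a real interpreter.
    return lm(f"Emulate this program on the task and give the output.\n"
              f"Task: {task}\nProgram:\n{program}")

def cogex_search(lm, task, score):
    # Step (3): search over candidate programs, keep the best-scoring one.
    candidates = generate_pseudo_programs(lm, task)
    results = [(score(out), prog, out)
               for prog in candidates
               if (out := emulate(lm, prog, task))]
    return max(results)  # (best_score, program, answer)

# Toy stand-ins so the sketch runs without a real LM:
toy_lm = lambda prompt: f"out[{len(prompt) % 7}]"
toy_score = lambda out: len(out)
best = cogex_search(toy_lm, "count the vowels in 'banana'", toy_score)
```

The key design point the snippet tries to capture is that execution is emulated by the model rather than a real interpreter, so leaf functions never need concrete implementations.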

SELF-[IN]CORRECT: LLMs Struggle with Discriminating Self-Generated Responses

no code implementations · 4 Apr 2024 · Dongwei Jiang, Jingyu Zhang, Orion Weller, Nathaniel Weir, Benjamin Van Durme, Daniel Khashabi

For this to be true, LLMs would need to be better at discriminating among previously generated alternatives than at generating initial responses.

TV-TREES: Multimodal Entailment Trees for Neuro-Symbolic Video Reasoning

no code implementations · 29 Feb 2024 · Kate Sanders, Nathaniel Weir, Benjamin Van Durme

It is challenging for models to understand complex, multimodal content such as television clips, and this is in part because video-language models often rely on single-modality reasoning and lack interpretability.

Question Answering · Video Understanding

Enhancing Systematic Decompositional Natural Language Inference Using Informal Logic

no code implementations · 22 Feb 2024 · Nathaniel Weir, Kate Sanders, Orion Weller, Shreya Sharma, Dongwei Jiang, Zhengping Jiang, Bhavana Dalvi Mishra, Oyvind Tafjord, Peter Jansen, Peter Clark, Benjamin Van Durme

Recent language models enable new opportunities for structured reasoning with text, such as the construction of intuitive, proof-like textual entailment trees without relying on brittle formal logic.

Formal Logic · Knowledge Distillation +2

Reframing Tax Law Entailment as Analogical Reasoning

no code implementations · 12 Jan 2024 · Xinrui Zou, Ming Zhang, Nathaniel Weir, Benjamin Van Durme, Nils Holzenberger

We reframe statutory reasoning as an analogy task, where each analogy instance combines two instances of statutory reasoning.

Retrieval

"According to ...": Prompting Language Models Improves Quoting from Pre-Training Data

1 code implementation · 22 May 2023 · Orion Weller, Marc Marone, Nathaniel Weir, Dawn Lawrie, Daniel Khashabi, Benjamin Van Durme

Large Language Models (LLMs) may hallucinate and generate fake information, despite pre-training on factual data.

Defending Against Disinformation Attacks in Open-Domain Question Answering

1 code implementation · 20 Dec 2022 · Orion Weller, Aleem Khan, Nathaniel Weir, Dawn Lawrie, Benjamin Van Durme

Recent work in open-domain question answering (ODQA) has shown that adversarial poisoning of the search collection can cause large drops in accuracy for production systems.

Data Poisoning · Misinformation +1

NELLIE: A Neuro-Symbolic Inference Engine for Grounded, Compositional, and Explainable Reasoning

no code implementations · 16 Sep 2022 · Nathaniel Weir, Peter Clark, Benjamin Van Durme

Our goal is a modern approach to answering questions via systematic reasoning, where answers are supported by human-interpretable proof trees grounded in an NL corpus of authoritative facts.

Hallucination · Language Modeling +2

InFillmore: Frame-Guided Language Generation with Bidirectional Context

no code implementations · Joint Conference on Lexical and Computational Semantics 2021 · Jiefu Ou, Nathaniel Weir, Anton Belyy, Felix Yu, Benjamin Van Durme

We propose a structured extension to bidirectional-context conditional language generation, or "infilling," inspired by Frame Semantic theory (Fillmore, 1976).

Text Infilling

COD3S: Diverse Generation with Discrete Semantic Signatures

1 code implementation · EMNLP 2020 · Nathaniel Weir, João Sedoc, Benjamin Van Durme

We present COD3S, a novel method for generating semantically diverse sentences using neural sequence-to-sequence (seq2seq) models.

Diversity · Semantic Textual Similarity +1
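A common way to obtain the kind of discrete semantic signature COD3S builds on is random-hyperplane locality-sensitive hashing over sentence embeddings, so that semantically similar sentences tend to share bit prefixes while dissimilar ones diverge. The sketch below illustrates that hashing idea only; the dimensions, bit counts, and vectors are toy assumptions, not values from the paper.

```python
import numpy as np

def lsh_signature(embedding, hyperplanes):
    # Sign of each random projection gives one bit; nearby embeddings
    # tend to share bits (random-hyperplane LSH).
    return "".join("1" if h @ embedding > 0 else "0" for h in hyperplanes)

rng = np.random.default_rng(0)
dim, n_bits = 8, 16
hyperplanes = rng.normal(size=(n_bits, dim))

a = rng.normal(size=dim)
b = a + 0.01 * rng.normal(size=dim)   # near-duplicate of a
c = rng.normal(size=dim)              # unrelated vector

sig_a, sig_b, sig_c = (lsh_signature(v, hyperplanes) for v in (a, b, c))
agree_ab = sum(x == y for x, y in zip(sig_a, sig_b))
agree_ac = sum(x == y for x, y in zip(sig_a, sig_c))
print(agree_ab, agree_ac)  # near-duplicates typically agree on more bits
```

Conditioning generation on distinct signatures like these is what forces the decoded outputs apart semantically rather than just lexically.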

Probing Neural Language Models for Human Tacit Assumptions

no code implementations · 10 Apr 2020 · Nathaniel Weir, Adam Poliak, Benjamin Van Durme

Our prompts are based on human responses in a psychological study of conceptual associations.

Diagnostic
