Search Results for author: Belinda Z. Li

Found 18 papers, 10 papers with code

Language Modeling with Editable External Knowledge

1 code implementation • 17 Jun 2024 • Belinda Z. Li, Emmy Liu, Alexis Ross, Abbas Zeitoun, Graham Neubig, Jacob Andreas

This paper introduces ERASE, which improves model behavior when new documents are acquired by incrementally deleting or rewriting other entries in the knowledge base each time a document is added.

Language Modelling · Retrieval

Bayesian Preference Elicitation with Language Models

no code implementations • 8 Mar 2024 • Kunal Handa, Yarin Gal, Ellie Pavlick, Noah Goodman, Jacob Andreas, Alex Tamkin, Belinda Z. Li

We introduce OPEN (Optimal Preference Elicitation with Natural language), a framework that uses Bayesian optimal experimental design (BOED) to guide the choice of informative questions and an LM to extract features and translate abstract BOED queries into natural language questions.

Experimental Design
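The BOED idea behind OPEN, choosing the question whose answer is expected to shrink uncertainty about the user's preferences the most, can be illustrated with a toy discrete example. This is a minimal sketch of expected information gain, not the paper's implementation; all names and the two-hypothesis setup are hypothetical.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_information_gain(prior, likelihood):
    """EIG of asking one question.

    prior:      shape (H,), belief over hypotheses about the user.
    likelihood: shape (H, A), P(answer a | hypothesis h) for this question.
    """
    marginal = prior @ likelihood                 # P(a), shape (A,)
    joint = prior[:, None] * likelihood           # P(h, a), shape (H, A)
    posteriors = joint / marginal                 # P(h | a), column-normalized
    expected_post_entropy = sum(
        marginal[a] * entropy(posteriors[:, a])
        for a in range(likelihood.shape[1])
    )
    # EIG = H(prior) - E_a[ H(posterior given answer a) ]
    return entropy(prior) - expected_post_entropy

# Two hypotheses about the user's preference; two candidate yes/no questions.
prior = np.array([0.5, 0.5])
q_informative = np.array([[0.9, 0.1],    # hypothesis 0 -> mostly "yes"
                          [0.1, 0.9]])   # hypothesis 1 -> mostly "no"
q_uninformative = np.array([[0.5, 0.5],
                            [0.5, 0.5]])

# A BOED loop would ask the question with the higher EIG.
assert expected_information_gain(prior, q_informative) > \
       expected_information_gain(prior, q_uninformative)
```

In OPEN, the LM's role is to turn such abstract queries into fluent natural-language questions and to map free-form answers back into the formal space.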

Eliciting Human Preferences with Language Models

1 code implementation • 17 Oct 2023 • Belinda Z. Li, Alex Tamkin, Noah Goodman, Jacob Andreas

Language models (LMs) can be directed to perform target tasks by using labeled examples or natural language prompts.

Toward Interactive Dictation

no code implementations • 8 Jul 2023 • Belinda Z. Li, Jason Eisner, Adam Pauls, Sam Thomson

Voice dictation is an increasingly important text input modality.

LaMPP: Language Models as Probabilistic Priors for Perception and Action

1 code implementation • 3 Feb 2023 • Belinda Z. Li, William Chen, Pratyusha Sharma, Jacob Andreas

Language models trained on large text corpora encode rich distributional information about real-world environments and action sequences.

Activity Recognition · Decision Making · +2
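The "probabilistic prior" framing in LaMPP amounts to Bayesian rescoring: a perception model supplies a likelihood over labels, and an LM supplies a prior, which are combined by Bayes' rule. The sketch below shows only that combination step, with made-up numbers standing in for actual model scores; the label names and function are illustrative.

```python
import numpy as np

def rescore_with_lm_prior(perception_logprobs, lm_prior_logprobs):
    """Combine perception scores with an LM prior via Bayes' rule.

    posterior(label | obs) ∝ p(obs | label) * p_LM(label)
    Both inputs are log-probabilities over the same label set.
    """
    log_post = perception_logprobs + lm_prior_logprobs
    log_post -= np.logaddexp.reduce(log_post)    # normalize in log space
    return np.exp(log_post)

labels = ["washing hands", "washing dishes"]
# The perception model is ambiguous between the two actions...
perception = np.log([0.5, 0.5])
# ...but an LM prior, conditioned on context (e.g. the room), prefers one.
lm_prior = np.log([0.8, 0.2])

posterior = rescore_with_lm_prior(perception, lm_prior)
assert labels[int(np.argmax(posterior))] == "washing hands"
```

Working in log space avoids underflow when the same combination is applied over long action sequences.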

Language Modeling with Latent Situations

no code implementations • 20 Dec 2022 • Belinda Z. Li, Maxwell Nye, Jacob Andreas

Language models (LMs) often generate incoherent outputs: they refer to events and entity states that are incompatible with the state of the world described in their inputs.

Language Modelling

Quantifying Adaptability in Pre-trained Language Models with 500 Tasks

2 code implementations • NAACL 2022 • Belinda Z. Li, Jane Yu, Madian Khabsa, Luke Zettlemoyer, Alon Halevy, Jacob Andreas

When a neural language model (LM) is adapted to perform a new task, what aspects of the task predict the eventual performance of the model?

Language Modelling · Logical Reasoning · +2

Implicit Representations of Meaning in Neural Language Models

1 code implementation • ACL 2021 • Belinda Z. Li, Maxwell Nye, Jacob Andreas

Does the effectiveness of neural language models derive entirely from accurate modeling of surface word co-occurrence statistics, or do these models represent and reason about the world they describe?

Text Generation

On Unifying Misinformation Detection

1 code implementation • NAACL 2021 • Nayeon Lee, Belinda Z. Li, Sinong Wang, Pascale Fung, Hao Ma, Wen-tau Yih, Madian Khabsa

In this paper, we introduce UnifiedM2, a general-purpose misinformation model that jointly models multiple domains of misinformation with a single, unified setup.

Few-Shot Learning · Misinformation

Efficient One-Pass End-to-End Entity Linking for Questions

3 code implementations • EMNLP 2020 • Belinda Z. Li, Sewon Min, Srinivasan Iyer, Yashar Mehdad, Wen-tau Yih

We present ELQ, a fast end-to-end entity linking model for questions, which uses a biencoder to jointly perform mention detection and linking in one pass.

Entity Linking · Question Answering
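The "biencoder" in ELQ means mentions and knowledge-base entities are embedded independently, so linking reduces to a maximum-inner-product search against precomputed entity vectors. The sketch below shows that scoring step only, with random vectors standing in for the paper's BERT encoders; entity names and dimensions are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Precomputed entity embeddings, one row per KB entity. In ELQ these come
# from an entity encoder run once offline; random vectors stand in here.
entity_names = ["Paris_(city)", "Paris_Hilton", "Plaster_of_Paris"]
entity_embs = rng.normal(size=(3, 64))

# Embedding of a detected mention span (say, "paris" in a question),
# faked as a noisy copy of the entity it should link to.
mention_emb = entity_embs[0] + 0.01 * rng.normal(size=64)

# Linking = argmax of inner products between mention and entity embeddings.
scores = entity_embs @ mention_emb
predicted = entity_names[int(np.argmax(scores))]
assert predicted == "Paris_(city)"
```

Because entity vectors are fixed at inference time, this search can be served by an approximate nearest-neighbor index, which is what makes the one-pass setup fast.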

Linformer: Self-Attention with Linear Complexity

4 code implementations • 8 Jun 2020 • Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma

Large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications.

Language Modelling
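Linformer's linear complexity comes from projecting the keys and values along the sequence dimension with learned matrices before attention, so the attention map is n×k rather than n×n. A minimal single-head NumPy sketch of that idea (not the paper's multi-head implementation; shapes and initialization are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def linformer_attention(Q, K, V, E, F):
    """Single-head Linformer attention.

    Q, K, V: (n, d) queries/keys/values.
    E, F:    (k, n) learned projections that compress the sequence
             dimension from n to k, giving O(n*k) cost instead of O(n^2).
    """
    d = Q.shape[-1]
    K_proj = E @ K                                 # (k, d)
    V_proj = F @ V                                 # (k, d)
    attn = softmax(Q @ K_proj.T / np.sqrt(d))      # (n, k), not (n, n)
    return attn @ V_proj                           # (n, d)

n, d, k = 512, 64, 32
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
E, F = (rng.normal(size=(k, n)) / np.sqrt(n) for _ in range(2))
out = linformer_attention(Q, K, V, E, F)
assert out.shape == (n, d)
```

With k fixed as n grows, both time and memory for the attention map scale linearly in sequence length, which is the paper's central claim.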

Language Models as Fact Checkers?

no code implementations • WS 2020 • Nayeon Lee, Belinda Z. Li, Sinong Wang, Wen-tau Yih, Hao Ma, Madian Khabsa

Recent work has suggested that language models (LMs) store both common-sense and factual knowledge learned from pre-training data.

Common Sense Reasoning · Language Modelling · +2

Active Learning for Coreference Resolution using Discrete Annotation

1 code implementation • ACL 2020 • Belinda Z. Li, Gabriel Stanovsky, Luke Zettlemoyer

We improve upon pairwise annotation for active learning in coreference resolution by asking annotators to identify a mention's antecedent when a presented mention pair is deemed not coreferent.

Active Learning · Clustering · +1
