Search Results for author: Leon Bergen

Found 11 papers, 6 papers with code

Predicting Reference: What do Language Models Learn about Discourse Models?

no code implementations • EMNLP 2020 • Shiva Upadhye, Leon Bergen, Andrew Kehler

Whereas there is a growing literature that probes neural language models to assess the degree to which they have latently acquired grammatical knowledge, little if any research has investigated their acquisition of discourse modeling ability.

IR2: Information Regularization for Information Retrieval

1 code implementation • 25 Feb 2024 • Jianyou Wang, Kaicheng Wang, Xiaoyue Wang, Weili Cao, Ramamohan Paturi, Leon Bergen

This approach, representing a novel application of regularization techniques in synthetic data creation for IR, is tested on three recent IR tasks characterized by complex queries: DORIS-MAE, ArguAna, and WhatsThatBook.

Tasks: Information Retrieval, Retrieval (+1 more)

DORIS-MAE: Scientific Document Retrieval using Multi-level Aspect-based Queries

1 code implementation • 7 Oct 2023 • Jianyou Wang, Kaicheng Wang, Xiaoyue Wang, Prudhviraj Naidu, Leon Bergen, Ramamohan Paturi

In scientific research, the ability to effectively retrieve relevant documents based on complex, multifaceted queries is critical.

Tasks: Retrieval

Systematic Generalization with Edge Transformers

1 code implementation • NeurIPS 2021 • Leon Bergen, Timothy J. O'Donnell, Dzmitry Bahdanau

Recent research suggests that systematic generalization in natural language understanding remains a challenge for state-of-the-art neural models such as Transformers and Graph Neural Networks.

Tasks: Dependency Parsing, Natural Language Understanding (+3 more)

Jointly Learning Truth-Conditional Denotations and Groundings using Parallel Attention

no code implementations • 14 Apr 2021 • Leon Bergen, Dzmitry Bahdanau, Timothy J. O'Donnell

We present a model that jointly learns the denotations of words together with their groundings using a truth-conditional semantics.

Tasks: Question Answering, Visual Question Answering

Word Frequency Does Not Predict Grammatical Knowledge in Language Models

1 code implementation • EMNLP 2020 • Charles Yu, Ryan Sie, Nico Tedeschi, Leon Bergen

Neural language models learn, to varying degrees of accuracy, the grammatical properties of natural languages.

Speakers enhance contextually confusable words

no code implementations • ACL 2020 • Eric Meinhardt, Eric Bakovic, Leon Bergen

Recent work has found evidence that natural languages are shaped by pressures for efficient communication; e.g., the more contextually predictable a word is, the fewer speech sounds or syllables it has (Piantadosi et al., 2011).
