Search Results for author: Timothy J. O'Donnell

Found 17 papers, 9 papers with code

From Language Models over Tokens to Language Models over Characters

no code implementations • 4 Dec 2024 • Tim Vieira, Ben LeBrun, Mario Giulianelli, Juan Luis Gastaldi, Brian DuSell, John Terilla, Timothy J. O'Donnell, Ryan Cotterell

Modern language models are internally (and mathematically) distributions over token strings rather than character strings, posing numerous challenges for programmers building user applications on top of them.

Language Modelling
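The token/character mismatch described in the abstract can be made concrete with a toy marginalization: under a token-level model, a character string's probability is the sum over every token sequence that spells it out. A minimal sketch, where the vocabulary, probabilities, and stopping rule are illustrative assumptions rather than the paper's model:

```python
# Toy token-level language model: unigram token probabilities plus a stop
# probability, so P(t1..tn) = P_STOP * prod_i (1 - P_STOP) * p(t_i).
# Vocabulary and probabilities are illustrative, not from the paper.
TOKEN_P = {"ab": 0.5, "a": 0.3, "b": 0.2}
P_STOP = 0.5

def token_seq_prob(tokens):
    p = P_STOP
    for t in tokens:
        p *= (1 - P_STOP) * TOKEN_P[t]
    return p

def tokenizations(s):
    # Enumerate every way to segment the character string s into tokens.
    if s == "":
        yield []
        return
    for tok in TOKEN_P:
        if s.startswith(tok):
            for rest in tokenizations(s[len(tok):]):
                yield [tok] + rest

def char_string_prob(s):
    # Character-level probability: marginalize over all tokenizations of s.
    return sum(token_seq_prob(toks) for toks in tokenizations(s))
```

Here the string "ab" has two tokenizations, ["ab"] and ["a", "b"], and its character-level probability is the sum of both token-sequence probabilities; real subword vocabularies make this marginal far harder to compute.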

Reframing linguistic bootstrapping as joint inference using visually-grounded grammar induction models

1 code implementation • 17 Jun 2024 • Eva Portelance, Siva Reddy, Timothy J. O'Donnell

Here, we argue that they are instead both contingent on a more general learning strategy for language acquisition: joint learning.

Language Acquisition · Language Modelling · +1

Correlation Does Not Imply Compensation: Complexity and Irregularity in the Lexicon

no code implementations • 7 Jun 2024 • Amanda Doucette, Ryan Cotterell, Morgan Sonderegger, Timothy J. O'Donnell

It has been claimed that within a language, morphologically irregular words are more likely to be phonotactically simple and morphologically regular words are more likely to be phonotactically complex.

The Stable Entropy Hypothesis and Entropy-Aware Decoding: An Analysis and Algorithm for Robust Natural Language Generation

no code implementations • 14 Feb 2023 • Kushal Arora, Timothy J. O'Donnell, Doina Precup, Jason Weston, Jackie C. K. Cheung

State-of-the-art language generation models can degenerate when applied to open-ended generation problems such as text completion, story generation, or dialog modeling.

Diversity · Story Generation
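The entropy-aware idea in the title can be illustrated with a small sketch: compute the Shannon entropy of each next-token distribution and flag decoding steps where it drifts outside a target band. The distributions and band values below are illustrative assumptions, not the paper's algorithm:

```python
import math

def entropy(probs):
    # Shannon entropy (in nats) of a next-token distribution.
    return -sum(p * math.log(p) for p in probs if p > 0)

def outside_band(probs, lower, upper):
    # Flag a decoding step whose entropy has left a target band --
    # the kind of signal an entropy-aware decoder could intervene on.
    h = entropy(probs)
    return h < lower or h > upper

peaked = [0.97, 0.01, 0.01, 0.01]  # near-deterministic step: low entropy
flat = [0.25, 0.25, 0.25, 0.25]    # maximally uncertain step: high entropy
```

Under this sketch, both the peaked and the flat distribution would be flagged against a band of, say, (0.5, 1.2) nats, while a moderately concentrated distribution would pass.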

Evaluating Distributional Distortion in Neural Language Modeling

no code implementations • ICLR 2022 • Benjamin LeBrun, Alessandro Sordoni, Timothy J. O'Donnell

To address this gap, we develop a controlled evaluation scheme which uses generative models trained on natural data as artificial languages from which we can exactly compute sequence probabilities.

Language Modelling
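The scheme's key ingredient, an artificial language whose sequence probabilities are exactly computable, can be sketched with a tiny bigram generator; by the chain rule, any string's log-probability is a closed-form sum. The transition table is an illustrative assumption, not the paper's setup:

```python
import math

# A tiny bigram "artificial language": exact next-symbol conditionals,
# so every sequence probability follows from the chain rule.
# Transition values are illustrative, not from the paper.
BIGRAM = {
    "<s>": {"a": 0.6, "b": 0.4},
    "a":   {"a": 0.2, "b": 0.5, "</s>": 0.3},
    "b":   {"a": 0.7, "b": 0.1, "</s>": 0.2},
}

def exact_log_prob(seq):
    # log P(seq) = sum_t log P(x_t | x_{t-1}), including end-of-sequence.
    logp, prev = 0.0, "<s>"
    for sym in list(seq) + ["</s>"]:
        logp += math.log(BIGRAM[prev][sym])
        prev = sym
    return logp
```

For example, P("ab") = 0.6 × 0.5 × 0.2 = 0.06 exactly; a trained model's estimates can then be compared against such ground-truth probabilities.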

Systematic Generalization with Edge Transformers

1 code implementation • NeurIPS 2021 • Leon Bergen, Timothy J. O'Donnell, Dzmitry Bahdanau

Recent research suggests that systematic generalization in natural language understanding remains a challenge for state-of-the-art neural models such as Transformers and Graph Neural Networks.

Dependency Parsing · Natural Language Understanding · +3

Compositional Generalization in Dependency Parsing

no code implementations • ACL 2022 • Emily Goodwin, Siva Reddy, Timothy J. O'Donnell, Dzmitry Bahdanau

To test compositional generalization in semantic parsing, Keysers et al. (2020) introduced Compositional Freebase Queries (CFQ).

Dependency Parsing · Semantic Parsing

Linguistic Dependencies and Statistical Dependence

1 code implementation • EMNLP 2021 • Jacob Louis Hoover, Alessandro Sordoni, Wenyu Du, Timothy J. O'Donnell

Are pairs of words that tend to occur together also likely to stand in a linguistic dependency?
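One standard way to quantify whether two words "tend to occur together" is pointwise mutual information estimated from co-occurrence counts. A minimal sketch, where the toy corpus and the within-sentence co-occurrence window are illustrative assumptions, not the paper's estimator:

```python
import math
from collections import Counter

# Toy corpus; PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) ), estimated
# from within-sentence co-occurrence counts. Purely illustrative.
sentences = [
    ["the", "dog", "barked"],
    ["the", "dog", "slept"],
    ["the", "cat", "slept"],
]

pair_counts, word_counts, n_pairs = Counter(), Counter(), 0
for sent in sentences:
    for i, w in enumerate(sent):
        word_counts[w] += 1
        for v in sent[i + 1:]:
            pair_counts[tuple(sorted((w, v)))] += 1
            n_pairs += 1
n_words = sum(word_counts.values())

def pmi(x, y):
    # Positive values mean the pair co-occurs more than chance predicts.
    p_xy = pair_counts[tuple(sorted((x, y)))] / n_pairs
    return math.log2(p_xy / ((word_counts[x] / n_words) * (word_counts[y] / n_words)))
```

The paper's question is then whether word pairs with high statistical dependence of this kind also tend to be linked by a syntactic dependency edge.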

Characterizing Idioms: Conventionality and Contingency

no code implementations • ACL 2022 • Michaela Socolof, Jackie Chi Kit Cheung, Michael Wagner, Timothy J. O'Donnell

Second, the non-canonical meanings of words in an idiom are contingent on the presence of other words in the idiom.

Jointly Learning Truth-Conditional Denotations and Groundings using Parallel Attention

no code implementations • 14 Apr 2021 • Leon Bergen, Dzmitry Bahdanau, Timothy J. O'Donnell

We present a model that jointly learns the denotations of words together with their groundings using a truth-conditional semantics.

Question Answering · Visual Question Answering

Probing Linguistic Systematicity

1 code implementation • ACL 2020 • Emily Goodwin, Koustuv Sinha, Timothy J. O'Donnell

Recently, there has been much interest in the question of whether deep natural language understanding models exhibit systematicity: generalizing such that units like words make consistent contributions to the meaning of the sentences in which they appear.

Natural Language Inference · Natural Language Understanding

CLOSURE: Assessing Systematic Generalization of CLEVR Models

3 code implementations • 12 Dec 2019 • Dzmitry Bahdanau, Harm de Vries, Timothy J. O'Donnell, Shikhar Murty, Philippe Beaudoin, Yoshua Bengio, Aaron Courville

In this work, we study how systematic the generalization of such models is, that is, to what extent they are capable of handling novel combinations of known linguistic constructs.

Few-Shot Learning · Systematic Generalization · +1

A generalized parsing framework for Abstract Grammars

no code implementations • 31 Oct 2017 • Daniel Harasim, Chris Bruno, Eva Portelance, Martin Rohrmeier, Timothy J. O'Donnell

This technical report presents a general framework for parsing a variety of grammar formalisms.
