Search Results for author: Aaron Traylor

Found 7 papers, 2 papers with code

Transferring Representations of Logical Connectives

no code implementations ACL (NALOMA, IWCS) 2021 Aaron Traylor, Ellie Pavlick, Roman Feiman

In modern natural language processing pipelines, it is common practice to “pretrain” a generative language model on a large corpus of text, and then to “finetune” the resulting representations by continuing to train them on a discriminative textual inference task.

Language Modelling
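
To make the pretrain-then-finetune pipeline described above concrete, here is a minimal sketch using the Hugging Face transformers library; the GPT-2 checkpoint and three-way entailment labels are illustrative assumptions, not the paper's actual setup.

```python
# A minimal sketch of pretrain-then-finetune: start from a generatively
# pretrained language model and continue training it on a discriminative
# textual inference task. Model and labels are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForSequenceClassification.from_pretrained(
    "gpt2", num_labels=3  # e.g. entailment / neutral / contradiction
)
model.config.pad_token_id = tokenizer.pad_token_id

# One finetuning step on a toy premise/hypothesis pair.
batch = tokenizer(
    ["A dog is running."], ["An animal is moving."],
    return_tensors="pt", padding=True, truncation=True,
)
labels = torch.tensor([0])  # entailment
loss = model(**batch, labels=labels).loss
loss.backward()  # continue training the pretrained representations
```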

Transformer Mechanisms Mimic Frontostriatal Gating Operations When Trained on Human Working Memory Tasks

no code implementations13 Feb 2024 Aaron Traylor, Jack Merullo, Michael J. Frank, Ellie Pavlick

Models based on the Transformer neural network architecture have seen success on a wide variety of tasks that appear to require complex "cognitive branching" -- or the ability to maintain pursuit of one goal while accomplishing others.
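
The gating operations in question can be illustrated with a toy update rule. The sketch below is a generic GRU-style input gate that decides whether to hold or overwrite a maintained goal state; it is an assumed illustration, not the paper's model or analysis.

```python
# A toy illustration of an input-gating operation of the kind the paper
# relates to Transformer mechanisms. This GRU-style update is a generic
# example, not the paper's actual model.
import torch

def gated_update(memory, candidate, gate_logit):
    """Write `candidate` into `memory` only to the extent the gate opens."""
    gate = torch.sigmoid(gate_logit)  # near 0 = hold current goal, near 1 = update
    return gate * candidate + (1 - gate) * memory

memory = torch.zeros(4)     # currently maintained goal state
candidate = torch.ones(4)   # new information competing for storage
print(gated_update(memory, candidate, torch.tensor(-5.0)))  # gate nearly closed: memory held
print(gated_update(memory, candidate, torch.tensor(5.0)))   # gate nearly open: memory overwritten
```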

AND does not mean OR: Using Formal Languages to Study Language Models' Representations

no code implementations ACL 2021 Aaron Traylor, Roman Feiman, Ellie Pavlick

A current open question in natural language processing is to what extent language models, which are trained with access only to the form of language, are able to capture the meaning of language.

Language Modelling, Open-Ended Question Answering

Enhancing Review Comprehension with Domain-Specific Commonsense

no code implementations6 Apr 2020 Aaron Traylor, Chen Chen, Behzad Golshan, Xiaolan Wang, Yuliang Li, Yoshihiko Suhara, Jinfeng Li, Cagatay Demiralp, Wang-Chiew Tan

In this paper, we introduce xSense, an effective system for review comprehension using domain-specific commonsense knowledge bases (xSense KBs).

Aspect Extraction, Knowledge Distillation, +3

Sampo: Unsupervised Knowledge Base Construction for Opinions and Implications

1 code implementation AKBC 2020 Nikita Bhutani, Aaron Traylor, Chen Chen, Xiaolan Wang, Behzad Golshan, Wang-Chiew Tan

Since it can be expensive to obtain training data to learn to extract implications for each new domain of reviews, we propose an unsupervised KBC system, Sampo. Specifically, Sampo is tailored to build KBs for domains where many reviews on the same domain are available.
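
As a loose illustration of unsupervised KB construction over a review corpus, the sketch below counts co-occurring tokens across reviews and keeps frequent pairs as candidate tuples; this is a simplified stand-in chosen for brevity, not Sampo's actual algorithm.

```python
# A generic illustration of unsupervised KB construction over a review
# corpus: mine frequently co-occurring token pairs as weak candidate
# (opinion, implication) tuples. A simplified stand-in, not Sampo itself.
from collections import Counter
from itertools import combinations

reviews = [
    "thin crust so the pizza was light",
    "thin crust and the pizza felt light",
    "thick crust made it heavy",
]

# Count token co-occurrences within each review; with many reviews from the
# same domain, frequent pairs become candidate KB entries.
pair_counts = Counter()
for review in reviews:
    tokens = review.split()
    pair_counts.update(combinations(sorted(set(tokens)), 2))

# Keep pairs seen in more than one review as (weak) candidate tuples.
kb_candidates = [pair for pair, n in pair_counts.items() if n > 1]
print(kb_candidates)
```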

Seq2Seq Models with Dropout can Learn Generalizable Reduplication

no code implementations WS 2018 Brandon Prickett, Aaron Traylor, Joe Pater

Natural language reduplication can pose a challenge to neural models of language, and has been argued to require variables (Marcus et al., 1999).
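
Here is a minimal sketch of the setup the title describes: a character-level seq2seq with dropout trained to totally reduplicate a stem (e.g. "ba" -> "baba"). The architecture and hyperparameters below are toy assumptions, not the paper's configuration.

```python
# A toy character-level seq2seq with dropout for total reduplication
# (stem -> stem+stem). Illustrative only, not the paper's model.
import torch
import torch.nn as nn

chars = "abcdefghijklmnopqrstuvwxyz"
PAD, SOS, EOS = 0, 1, 2
stoi = {c: i + 3 for i, c in enumerate(chars)}

def encode(word):
    return torch.tensor([stoi[c] for c in word] + [EOS])

class Seq2Seq(nn.Module):
    def __init__(self, vocab=29, dim=64, p_drop=0.5):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim, padding_idx=PAD)
        self.drop = nn.Dropout(p_drop)  # dropout argued to aid generalization
        self.enc = nn.GRU(dim, dim, batch_first=True)
        self.dec = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, src, tgt_in):
        _, h = self.enc(self.drop(self.emb(src)))         # encode the stem
        dec_out, _ = self.dec(self.drop(self.emb(tgt_in)), h)
        return self.out(dec_out)

model = Seq2Seq()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)

# One training step on a single stem/reduplicated pair.
src = encode("ba").unsqueeze(0)    # "ba"
tgt = encode("baba").unsqueeze(0)  # "baba"
tgt_in = torch.cat([torch.tensor([[SOS]]), tgt[:, :-1]], dim=1)  # teacher forcing
opt.zero_grad()
logits = model(src, tgt_in)
loss = loss_fn(logits.reshape(-1, logits.size(-1)), tgt.reshape(-1))
loss.backward()
opt.step()
```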
