no code implementations • ACL (NALOMA, IWCS) 2021 • Aaron Traylor, Ellie Pavlick, Roman Feiman
In modern natural language processing pipelines, it is common practice to “pretrain” a generative language model on a large corpus of text, and then to “finetune” the resulting representations by continuing to train them on a discriminative textual-inference task.
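As a rough, hedged illustration of that two-stage recipe (not code from the paper), the PyTorch sketch below pretrains a toy language model with a next-token objective and then finetunes it, with a newly attached classification head, on a three-way inference task; the model, data, and hyperparameters are all invented stand-ins.

```python
import torch
import torch.nn as nn

VOCAB, DIM, N_LABELS = 1000, 64, 3   # toy vocabulary / hidden size / entail-neutral-contradict

class TinyLM(nn.Module):
    """A small generative LM: embeddings + GRU + next-token head."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.lm_head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):
        hidden, _ = self.rnn(self.embed(tokens))
        return hidden, self.lm_head(hidden)

model = TinyLM()

# Stage 1: "pretrain" with a language-modeling loss on a stand-in corpus batch.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tokens = torch.randint(0, VOCAB, (8, 32))            # random tokens standing in for corpus text
hidden, logits = model(tokens)
lm_loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
lm_loss.backward()
opt.step()

# Stage 2: "finetune" the same representations on a discriminative inference task.
classifier = nn.Linear(DIM, N_LABELS)                 # new task-specific head
opt = torch.optim.Adam(list(model.parameters()) + list(classifier.parameters()), lr=1e-4)
pairs = torch.randint(0, VOCAB, (8, 32))              # premise + hypothesis, concatenated
labels = torch.randint(0, N_LABELS, (8,))
hidden, _ = model(pairs)
cls_loss = nn.functional.cross_entropy(classifier(hidden[:, -1]), labels)
cls_loss.backward()
opt.step()
```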
no code implementations • 13 Feb 2024 • Aaron Traylor, Jack Merullo, Michael J. Frank, Ellie Pavlick
Models based on the Transformer neural network architecture have seen success on a wide variety of tasks that appear to require complex "cognitive branching" -- or the ability to maintain pursuit of one goal while accomplishing others.
no code implementations • ACL 2021 • Aaron Traylor, Roman Feiman, Ellie Pavlick
A current open question in natural language processing is to what extent language models, which are trained with access only to the form of language, are able to capture the meaning of language.
no code implementations • 6 Apr 2020 • Aaron Traylor, Chen Chen, Behzad Golshan, Xiaolan Wang, Yuliang Li, Yoshihiko Suhara, Jinfeng Li, Cagatay Demiralp, Wang-Chiew Tan
In this paper, we introduce xSense, an effective system for review comprehension using domain-specific commonsense knowledge bases (xSense KBs).
1 code implementation • AKBC 2020 • Nikita Bhutani, Aaron Traylor, Chen Chen, Xiaolan Wang, Behzad Golshan, Wang-Chiew Tan
Since it can be expensive to obtain training data to learn to extract implications for each new domain of reviews, we propose an unsupervised KBC system, Sampo. Specifically, Sampo is tailored to build KBs for domains where many reviews are available.
1 code implementation • ACL 2019 • Derek Tam, Nicholas Monath, Ari Kobren, Aaron Traylor, Rajarshi Das, Andrew McCallum
We evaluate STANCE's ability to detect whether two strings can refer to the same entity, a task we term alias detection.
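For context, a minimal baseline for this alias-detection setup (not the STANCE model itself, which learns the similarity function) could simply compare character trigrams of the two strings; the function names and scoring below are illustrative assumptions.

```python
def char_ngrams(s: str, n: int = 3) -> set:
    """Character n-grams of a lowercased, boundary-padded string."""
    s = f"#{s.lower()}#"                      # pad so short names still yield n-grams
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def alias_score(a: str, b: str) -> float:
    """Jaccard overlap of character trigrams; higher = more plausibly aliases."""
    ga, gb = char_ngrams(a), char_ngrams(b)
    return len(ga & gb) / max(len(ga | gb), 1)

print(alias_score("Barack Obama", "B. Obama"))       # relatively high
print(alias_score("Barack Obama", "Angela Merkel"))  # near zero
```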
no code implementations • WS 2018 • Brandon Prickett, Aaron Traylor, Joe Pater
Natural language reduplication can pose a challenge to neural models of language, and has been argued to require variables (Marcus et al., 1999).
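To make the pattern concrete, the toy sketch below generates totally reduplicated forms with an explicit copy rule, i.e. the kind of variable-based generalization referenced above; the stems and function name are illustrative only.

```python
def reduplicate(stem: str) -> str:
    """Total reduplication: copy the whole stem, whatever its content."""
    return stem + "-" + stem

# A symbolic rule with a copy variable applies to any stem, seen or unseen.
for stem in ["buku", "wiki", "orang"]:
    print(stem, "->", reduplicate(stem))
```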