1 code implementation • 1 Jun 2023 • Elias Stengel-Eskin, Kyle Rawlins, Benjamin Van Durme
We attempt to address this shortcoming by introducing AmP, a framework, dataset, and challenge for translating ambiguous natural language to formal representations like logic and code.
no code implementations • EACL 2021 • Patrick Xia, Guanghui Qin, Siddharth Vashishtha, Yunmo Chen, Tongfei Chen, Chandler May, Craig Harman, Kyle Rawlins, Aaron Steven White, Benjamin Van Durme
We present LOME, a system for performing multilingual information extraction.
no code implementations • 8 Apr 2020 • Aaron Steven White, Kyle Rawlins
We investigate the relationship between the frequency with which verbs are found in particular subcategorization frames and the acceptability of those verbs in those frames, focusing in particular on subordinate clause-taking verbs, such as "think", "want", and "tell".
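The kind of frequency–acceptability relationship described above can be illustrated with a toy correlation computation. This is a minimal sketch, not the paper's actual method or data: every verb, frame, count, and rating below is an invented placeholder, and the paper's analysis is more sophisticated than a single correlation coefficient.

```python
import math
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical (verb, frame, corpus count, mean acceptability rating on 1-7).
# These numbers are fabricated for illustration only.
data = [
    ("think", "NP _ that S",    9500, 6.8),
    ("want",  "NP _ that S",      40, 2.1),
    ("want",  "NP _ to VP",     8700, 6.9),
    ("tell",  "NP _ NP that S", 6100, 6.5),
    ("think", "NP _ to VP",       15, 1.8),
]

# Log-transform counts, since corpus frequencies are heavily skewed.
log_freqs = [math.log(count) for _, _, count, _ in data]
ratings = [rating for _, _, _, rating in data]

r = pearson(log_freqs, ratings)
print(f"correlation(log frequency, acceptability) = {r:.3f}")
```

On this toy data the correlation is strongly positive, mirroring the intuition that verbs attested often in a frame tend to be judged acceptable in it; the interesting cases studied in work like this are where the two measures come apart.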
no code implementations • ACL 2020 • Seth Ebner, Patrick Xia, Ryan Culkin, Kyle Rawlins, Benjamin Van Durme
We present a novel document-level model for finding argument spans that fill an event's roles, connecting related ideas in sentence-level semantic role labeling and coreference resolution.
1 code implementation • LREC 2020 • Aaron Steven White, Elias Stengel-Eskin, Siddharth Vashishtha, Venkata Govindarajan, Dee Ann Reisinger, Tim Vieira, Keisuke Sakaguchi, Sheng Zhang, Francis Ferraro, Rachel Rudinger, Kyle Rawlins, Benjamin Van Durme
We present the Universal Decompositional Semantics (UDS) dataset (v1.0), which is bundled with the Decomp toolkit (v0.1).
no code implementations • 20 Sep 2018 • Najoung Kim, Kyle Rawlins, Benjamin Van Durme, Paul Smolensky
Distinguishing between arguments and adjuncts of a verb is a longstanding, nontrivial problem.
no code implementations • EMNLP 2018 • Aaron Steven White, Rachel Rudinger, Kyle Rawlins, Benjamin Van Durme
We use this dataset, which we make publicly available, to probe the behavior of current state-of-the-art neural systems, showing that these systems make certain systematic errors that are clearly visible through the lens of factuality prediction.
no code implementations • EACL 2017 • Aaron Steven White, Kyle Rawlins, Benjamin Van Durme
We propose the semantic proto-role linking model, which jointly induces both predicate-specific semantic roles and predicate-general semantic proto-roles based on semantic proto-role property likelihood judgments.
no code implementations • 8 Oct 2016 • Aaron Steven White, Drew Reisinger, Rachel Rudinger, Kyle Rawlins, Benjamin Van Durme
A linking theory explains how verbs' semantic arguments are mapped to their syntactic arguments: the inverse of the Semantic Role Labeling task from the shallow semantic parsing literature.
no code implementations • TACL 2015 • Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, Benjamin Van Durme
We present the first large-scale, corpus-based verification of Dowty's seminal theory of proto-roles.