Search Results for author: Aaron Steven White

Found 23 papers, 8 papers with code

Joint Universal Syntactic and Semantic Parsing

1 code implementation · 12 Apr 2021 · Elias Stengel-Eskin, Kenton Murray, Sheng Zhang, Aaron Steven White, Benjamin Van Durme

While numerous attempts have been made to jointly parse syntax and semantics, high performance in one domain typically comes at the price of performance in the other.

Semantic Parsing

Decomposing and Recomposing Event Structure

no code implementations · 18 Mar 2021 · William Gantt, Lelia Glass, Aaron Steven White

We present an event structure classification empirically derived from inferential properties annotated on sentence- and document-level Universal Decompositional Semantics (UDS) graphs.

Classification

Gradual Fine-Tuning for Low-Resource Domain Adaptation

1 code implementation · EACL (AdaptNLP) 2021 · Haoran Xu, Seth Ebner, Mahsa Yarmohammadi, Aaron Steven White, Benjamin Van Durme, Kenton Murray

Fine-tuning is known to improve NLP models by adapting an initial model trained on more plentiful but less domain-salient examples to data in a target domain.

Domain Adaptation

Natural Language Inference with Mixed Effects

1 code implementation · Joint Conference on Lexical and Computational Semantics 2020 · William Gantt, Benjamin Kane, Aaron Steven White

There is growing evidence that the prevalence of disagreement in the raw annotations used to construct natural language inference datasets makes the common practice of aggregating those annotations to a single label problematic.

Natural Language Inference
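
The core observation above can be illustrated with a minimal sketch: keeping each item's full annotation distribution preserves the disagreement signal that collapsing to a single majority label throws away. The data here is made up for illustration, and the per-annotator random effects of an actual mixed-effects model are only alluded to in comments, not implemented.

```python
from collections import Counter

# Hypothetical raw NLI annotations: item id -> one label per annotator.
# (Illustrative data only; not drawn from the paper's dataset.)
raw_annotations = {
    "item-1": ["entailment", "entailment", "entailment", "neutral", "entailment"],
    "item-2": ["entailment", "neutral", "contradiction", "neutral", "entailment"],
}

for item, labels in raw_annotations.items():
    counts = Counter(labels)
    # Aggregation collapses the annotations to a single label...
    majority = counts.most_common(1)[0][0]
    # ...while the full distribution preserves the disagreement that a
    # mixed-effects model (e.g. with per-annotator random intercepts) can use.
    distribution = {label: n / len(labels) for label, n in counts.items()}
    print(item, majority, distribution)
```

Note that "item-2" gets the same kind of single majority label as the near-unanimous "item-1", even though its annotations are split three ways; that lost distinction is the problem the abstract points at.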

Montague Grammar Induction

no code implementations · 15 Oct 2020 · Gene Louis Kim, Aaron Steven White

We propose a computational modeling framework for inducing combinatory categorial grammars from arbitrary behavioral data.

Frequency, Acceptability, and Selection: A case study of clause-embedding

no code implementations · 8 Apr 2020 · Aaron Steven White, Kyle Rawlins

We investigate the relationship between the frequency with which verbs are found in particular subcategorization frames and the acceptability of those verbs in those frames, focusing in particular on subordinate clause-taking verbs, such as "think", "want", and "tell".

Reading the Manual: Event Extraction as Definition Comprehension

no code implementations · EMNLP (spnlp) 2020 · Yunmo Chen, Tongfei Chen, Seth Ebner, Aaron Steven White, Benjamin Van Durme

We ask whether text understanding has progressed to where we may extract event information through incremental refinement of bleached statements derived from annotation manuals.

Event Extraction

Universal Decompositional Semantic Parsing

no code implementations · ACL 2020 · Elias Stengel-Eskin, Aaron Steven White, Sheng Zhang, Benjamin Van Durme

We introduce a transductive model for parsing into Universal Decompositional Semantics (UDS) representations, which jointly learns to map natural language utterances into UDS graph structures and annotate the graph with decompositional semantic attribute scores.

Semantic Parsing

The lexical and grammatical sources of neg-raising inferences

no code implementations · SCiL 2020 · Hannah Youngeun An, Aaron Steven White

We investigate neg(ation)-raising inferences, wherein negation on a predicate can be interpreted as though in that predicate's subordinate clause (e.g. "I don't think it's raining" read as "I think it's not raining").

A Framework for Decoding Event-Related Potentials from Text

no code implementations · WS 2019 · Shaorong Yan, Aaron Steven White

We propose a novel framework for modeling event-related potentials (ERPs) collected during reading that couples pre-trained convolutional decoders with a language model.

Language Modelling · Word Embeddings

Fine-Grained Temporal Relation Extraction

no code implementations · ACL 2019 · Siddharth Vashishtha, Benjamin Van Durme, Aaron Steven White

We present a novel semantic framework for modeling temporal relations and event durations that maps pairs of events to real-valued scales.

Relation Extraction · Transfer Learning
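
A minimal sketch of the real-valued-scale idea in the abstract above: if each event is placed on a real-valued timeline as an interval, coarse temporal relations between event pairs fall out of simple endpoint comparisons. The framework in the paper learns such values from annotations; the intervals and the `relation` helper below are illustrative assumptions, not the paper's model.

```python
def relation(e1, e2):
    """Classify the coarse temporal relation between two real-valued intervals."""
    s1, t1 = e1  # (start, end) of event 1 on a shared timeline
    s2, t2 = e2  # (start, end) of event 2
    if t1 <= s2:
        return "before"
    if t2 <= s1:
        return "after"
    if s1 <= s2 and t2 <= t1:
        return "contains"
    if s2 <= s1 and t1 <= t2:
        return "contained-by"
    return "overlaps"

# Example: "ate" spans [0.0, 0.3]; "slept" spans [0.5, 1.0]
print(relation((0.0, 0.3), (0.5, 1.0)))  # -> before
```

The design point is that a single pair of real numbers per event supports both relation classification and duration comparison (end minus start), which is why mapping pairs of events to scales can cover both tasks at once.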

Decomposing Generalization: Models of Generic, Habitual, and Episodic Statements

no code implementations · TACL 2019 · Venkata Subrahmanyan Govindarajan, Benjamin Van Durme, Aaron Steven White

We present a novel semantic framework for modeling linguistic expressions of generalization (generic, habitual, and episodic statements) as combinations of simple, real-valued referential properties of predicates and their arguments.

Word Embeddings

Lexicosyntactic Inference in Neural Models

no code implementations · EMNLP 2018 · Aaron Steven White, Rachel Rudinger, Kyle Rawlins, Benjamin Van Durme

We use this dataset, which we make publicly available, to probe the behavior of current state-of-the-art neural systems, showing that these systems make certain systematic errors that are clearly visible through the lens of factuality prediction.

Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation

no code implementations · EMNLP (ACL) 2018 · Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, Benjamin Van Durme

We present a large-scale collection of diverse natural language inference (NLI) datasets that help provide insight into how well a sentence representation captures distinct types of reasoning.

Natural Language Inference

Neural models of factuality

1 code implementation · NAACL 2018 · Rachel Rudinger, Aaron Steven White, Benjamin Van Durme

We present two neural models for event factuality prediction, which yield significant performance gains over previous models on three event factuality datasets: FactBank, UW, and MEANTIME.
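
To make the prediction target above concrete, here is a small sketch assuming the real-valued [-3, 3] factuality scale commonly used for such datasets, where 3 means the event certainly happened and -3 that it certainly did not. The thresholds and the `describe_factuality` helper are illustrative assumptions; the paper's models are neural regressors over sentence representations, not rule-based mappings like this.

```python
def describe_factuality(score: float) -> str:
    """Map a real-valued factuality score in [-3, 3] to a coarse description."""
    if not -3.0 <= score <= 3.0:
        raise ValueError("score must lie in [-3, 3]")
    if score >= 1.5:       # strongly positive: the event is asserted to have occurred
        return "event happened"
    if score <= -1.5:      # strongly negative: the event is asserted not to have occurred
        return "event did not happen"
    return "uncertain"     # middle of the scale: hedged or unknown

print(describe_factuality(3.0))   # e.g. "Jo left."
print(describe_factuality(-3.0))  # e.g. "Jo didn't leave."
print(describe_factuality(0.0))   # e.g. "Jo may have left."
```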

Inference is Everything: Recasting Semantic Resources into a Unified Evaluation Framework

no code implementations · IJCNLP 2017 · Aaron Steven White, Pushpendre Rastogi, Kevin Duh, Benjamin Van Durme

We propose to unify a variety of existing semantic classification tasks, such as semantic role labeling, anaphora resolution, and paraphrase detection, under the heading of Recognizing Textual Entailment (RTE).

General Classification · Image Captioning · +2

The Semantic Proto-Role Linking Model

no code implementations · EACL 2017 · Aaron Steven White, Kyle Rawlins, Benjamin Van Durme

We propose the semantic proto-role linking model, which jointly induces both predicate-specific semantic roles and predicate-general semantic proto-roles based on semantic proto-role property likelihood judgments.

Semantic Role Labeling

Computational linking theory

no code implementations · 8 Oct 2016 · Aaron Steven White, Drew Reisinger, Rachel Rudinger, Kyle Rawlins, Benjamin Van Durme

A linking theory explains how verbs' semantic arguments are mapped to their syntactic arguments, the inverse of the Semantic Role Labeling task from the shallow semantic parsing literature.

Semantic Parsing · Semantic Role Labeling
