Search Results for author: Jakob Prange

Found 13 papers, 7 papers with code

Subcategorizing Adverbials in Universal Conceptual Cognitive Annotation

no code implementations • EMNLP (LAW, DMR) 2021 • Zhuxin Wang, Jakob Prange, Nathan Schneider

Universal Conceptual Cognitive Annotation (UCCA) is a semantic annotation scheme that organizes texts into coarse predicate-argument structure, offering broad coverage of semantic phenomena.

Linguistic Frameworks Go Toe-to-Toe at Neuro-Symbolic Language Modeling

1 code implementation • 15 Dec 2021 • Jakob Prange, Nathan Schneider, Lingpeng Kong

We examine the extent to which, in principle, linguistic graph representations can complement and improve neural language modeling.

Language Modelling

UCCA's Foundational Layer: Annotation Guidelines v2.1

1 code implementation • 31 Dec 2020 • Omri Abend, Nathan Schneider, Dotan Dvir, Jakob Prange, Ari Rappoport

This is the annotation manual for Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013), specifically the Foundational Layer.

Supertagging the Long Tail with Tree-Structured Decoding of Complex Categories

1 code implementation • 2 Dec 2020 • Jakob Prange, Nathan Schneider, Vivek Srikumar

Our best tagger is capable of recovering a sizeable fraction of the long-tail supertags and even generates CCG categories that have never been seen in training, while approximating the prior state of the art in overall tag accuracy with fewer parameters.

Structured Prediction • TAG

Cross-lingual Semantic Representation for NLP with UCCA

no code implementations • COLING 2020 • Omri Abend, Dotan Dvir, Daniel Hershcovich, Jakob Prange, Nathan Schneider

This is an introductory tutorial to UCCA (Universal Conceptual Cognitive Annotation), a cross-linguistically applicable framework for semantic representation, with corpora annotated in English, German and French, and ongoing annotation in Russian and Hebrew.

Natural Language Processing • UCCA Parsing

Comparison by Conversion: Reverse-Engineering UCCA from Syntax and Lexical Semantics

2 code implementations • COLING 2020 • Daniel Hershcovich, Nathan Schneider, Dotan Dvir, Jakob Prange, Miryam de Lhoneux, Omri Abend

Building robust natural language understanding systems will require a clear characterization of whether and how various linguistic meaning representations complement each other.

Natural Language Understanding

Made for Each Other: Broad-coverage Semantic Structures Meet Preposition Supersenses

1 code implementation • CoNLL 2019 • Jakob Prange, Nathan Schneider, Omri Abend

Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013) is a typologically-informed, broad-coverage semantic annotation scheme that describes coarse-grained predicate-argument structure but currently lacks semantic roles.

Semantically Constrained Multilayer Annotation: The Case of Coreference

no code implementations • WS 2019 • Jakob Prange, Nathan Schneider, Omri Abend

We propose a coreference annotation scheme as a layer on top of the Universal Conceptual Cognitive Annotation foundational layer, treating units in predicate-argument structure as a basis for entity and event mentions.

Comprehensive Supersense Disambiguation of English Prepositions and Possessives

1 code implementation • ACL 2018 • Nathan Schneider, Jena D. Hwang, Vivek Srikumar, Jakob Prange, Austin Blodgett, Sarah R. Moeller, Aviram Stern, Adi Bitan, Omri Abend

Semantic relations are often signaled with prepositional or possessive marking, but extreme polysemy bedevils their analysis and automatic interpretation.

Adposition and Case Supersenses v2.6: Guidelines for English

3 code implementations • 7 Apr 2017 • Nathan Schneider, Jena D. Hwang, Vivek Srikumar, Archna Bhatia, Na-Rae Han, Tim O'Gorman, Sarah R. Moeller, Omri Abend, Adi Shalev, Austin Blodgett, Jakob Prange

This document offers a detailed linguistic description of SNACS (Semantic Network of Adposition and Case Supersenses; Schneider et al., 2018), an inventory of 52 semantic labels ("supersenses") that characterize the use of adpositions and case markers at a somewhat coarse level of granularity, as demonstrated in the STREUSLE corpus (https://github.com/nert-nlp/streusle/; version 4.5 tracks guidelines version 2.6).
