
CCG Supertagging

4 papers with code · Natural Language Processing

Combinatory Categorial Grammar (CCG; Steedman, 2000) is a highly lexicalized formalism. The standard parsing model of Clark and Curran (2007) uses over 400 lexical categories (or supertags), compared to about 50 part-of-speech tags for typical parsers.

Example:

Vinken  ,  61   years  old
N       ,  N/N  N      (S[adj]\NP)\NP
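
To make the task concrete, here is a toy Python sketch; the lexicon below is hypothetical and covers only this sentence, whereas a real supertagger chooses among 400+ candidate categories per token in context:

    # Toy supertagger: assign each token its CCG lexical category.
    lexicon = {
        "Vinken": "N",
        ",": ",",
        "61": "N/N",
        "years": "N",
        "old": r"(S[adj]\NP)\NP",
    }

    def supertag(tokens):
        # A real tagger disambiguates among many candidate categories
        # per word in context; this lookup has one entry per token.
        return [(tok, lexicon[tok]) for tok in tokens]

    print(supertag(["Vinken", ",", "61", "years", "old"]))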

Latest papers without code

An Empirical Investigation of Global and Local Normalization for Recurrent Neural Sequence Models Using a Continuous Relaxation to Beam Search

NAACL 2019

Globally normalized neural sequence models are considered superior to their locally normalized equivalents because they may ameliorate the effects of label bias.

CCG SUPERTAGGING MACHINE TRANSLATION
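
For intuition about the distinction, here is a minimal numpy sketch (toy scores, not the paper's model) contrasting the two normalization schemes on a two-step, two-label tagging problem:

    import numpy as np

    # step_scores[t, y_prev, y]: toy score for label y at step t given
    # previous label y_prev (prev = 0 at the start by convention).
    step_scores = np.array([[[1.0, 2.0], [0.5, 0.0]],
                            [[0.2, 1.5], [1.0, 0.3]]])

    def seq_score(seq):
        total, prev = 0.0, 0
        for t, y in enumerate(seq):
            total += step_scores[t, prev, y]
            prev = y
        return total

    def local_logprob(seq):
        # Locally normalized: softmax over labels at each step; the
        # per-step probabilities multiply, which can cause label bias.
        lp, prev = 0.0, 0
        for t, y in enumerate(seq):
            logits = step_scores[t, prev]
            lp += logits[y] - np.log(np.exp(logits).sum())
            prev = y
        return lp

    def global_logprob(seq):
        # Globally normalized: one partition function over all complete
        # label sequences, which can ameliorate label bias.
        all_seqs = [(a, b) for a in (0, 1) for b in (0, 1)]
        logZ = np.log(sum(np.exp(seq_score(s)) for s in all_seqs))
        return seq_score(seq) - logZ

    print(local_logprob((1, 0)), global_logprob((1, 0)))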

Probing What Different NLP Tasks Teach Machines about Function Word Comprehension

SEMEVAL 2019

Our results show that pretraining on language modeling performs the best on average across our probing tasks, supporting its widespread use for pretraining state-of-the-art NLP models, and CCG supertagging and NLI pretraining perform comparably.

CCG SUPERTAGGING LANGUAGE MODELLING NATURAL LANGUAGE INFERENCE

Language Modeling Teaches You More than Translation Does: Lessons Learned Through Auxiliary Syntactic Task Analysis

WS 2018

Recently, researchers have found that deep LSTMs trained on tasks like machine translation learn substantial syntactic and semantic information about their input sentences, including part-of-speech tags.

CCG SUPERTAGGING LANGUAGE MODELLING MACHINE TRANSLATION PART-OF-SPEECH TAGGING TRANSFER LEARNING
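
A minimal sketch of the probing methodology this line of work relies on, assuming scikit-learn is available; the random vectors below are placeholders for real frozen encoder states:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Probing: freeze an encoder, then train a simple classifier on its
    # hidden states to predict a syntactic label (here, POS tag ids).
    rng = np.random.default_rng(0)
    hidden_states = rng.normal(size=(200, 64))   # placeholder LSTM states
    pos_labels = rng.integers(0, 5, size=200)    # placeholder POS tag ids

    Xtr, Xte, ytr, yte = train_test_split(hidden_states, pos_labels,
                                          random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    # Held-out probe accuracy indicates how much POS information the
    # states encode; with random states it should sit near chance (0.2).
    print(probe.score(Xte, yte))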

Targeted Syntactic Evaluation of Language Models

EMNLP 2018

We automatically construct a large number of minimally different pairs of English sentences, each consisting of a grammatical and an ungrammatical sentence.

CCG SUPERTAGGING LANGUAGE MODELLING
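
A sketch of the evaluation protocol: a model passes an item when it assigns higher probability to the grammatical member of the pair. The bigram table below is a hypothetical stand-in for a trained language model:

    import math

    def sentence_logprob(tokens, lm):
        # Toy bigram scorer standing in for a real LM's sentence log-prob.
        return sum(math.log(lm.get((prev, word), 1e-6))
                   for prev, word in zip(["<s>"] + tokens, tokens))

    lm = {("<s>", "the"): 0.5, ("the", "author"): 0.2,
          ("author", "laughs"): 0.3, ("author", "laugh"): 0.01}

    # One minimal pair: grammatical vs. ungrammatical verb agreement.
    pairs = [(["the", "author", "laughs"], ["the", "author", "laugh"])]
    accuracy = sum(sentence_logprob(good, lm) > sentence_logprob(bad, lm)
                   for good, bad in pairs) / len(pairs)
    print(accuracy)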

A Continuous Relaxation of Beam Search for End-to-end Training of Neural Sequence Models

1 Aug 2017

In experiments, we show that optimizing this new training objective yields substantially better results on two sequence tasks (Named Entity Recognition and CCG Supertagging) when compared with both cross entropy trained greedy decoding and cross entropy trained beam decoding baselines.

CCG SUPERTAGGING NAMED ENTITY RECOGNITION
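
The core trick, sketched below in simplified form (not the paper's full relaxation): replace the non-differentiable argmax inside decoding with a peaked softmax, so candidate selection becomes a soft, differentiable mixture:

    import numpy as np

    def soft_argmax(scores, temperature=0.1):
        # As temperature -> 0 this approaches hard argmax selection;
        # real decoders mix candidate embeddings with these weights.
        weights = np.exp(scores / temperature)
        weights /= weights.sum()
        return weights @ np.arange(len(scores))  # soft index

    print(soft_argmax(np.array([1.0, 3.0, 2.0])))  # close to 1, the argmax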

Exploring the Syntactic Abilities of RNNs with Multi-task Learning

CONLL 2017

Recent work has explored the syntactic abilities of RNNs using the subject-verb agreement task, which diagnoses sensitivity to sentence structure.

CCG SUPERTAGGING LANGUAGE MODELLING MULTI-TASK LEARNING