Constituency Parsing

63 papers with code • 3 benchmarks • 6 datasets

Constituency parsing aims to extract from a sentence a constituency-based parse tree that represents its syntactic structure according to a phrase structure grammar.


                Sentence (S)
         +----------+----------+
         |                     |
      Noun (N)         Verb Phrase (VP)
         |                     |
       John            +-------+-------+
                       |               |
                    Verb (V)        Noun (N)
                       |               |
                     sees            Bill

Recent approaches convert the parse tree into a sequence via a depth-first traversal so that sequence-to-sequence models can be applied to it. The linearized version of the above parse tree looks as follows: (S (N John) (VP (V sees) (N Bill))).
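As a minimal sketch (not taken from any of the papers listed below), the linearization can be implemented by representing the tree as nested tuples and walking it depth-first; the `linearize` helper and the tuple encoding are illustrative choices, not a standard API:

```python
def linearize(node):
    """Depth-first traversal producing a bracketed string."""
    label, children = node
    if isinstance(children, str):        # leaf node: (label, word)
        return f"({label} {children})"
    inner = " ".join(linearize(child) for child in children)
    return f"({label} {inner})"

# The example tree for "John sees Bill":
tree = ("S", [("N", "John"),
              ("VP", [("V", "sees"),
                      ("N", "Bill")])])

print(linearize(tree))
# (S (N John) (VP (V sees) (N Bill)))
```

The resulting bracketed string is what a sequence-to-sequence model would be trained to emit token by token; recovering the tree is a matter of matching parentheses during decoding.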

Most implemented papers

Attention Is All You Need

tensorflow/tensor2tensor NeurIPS 2017

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration.

Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks

theamrzaki/text_summurization_abstractive_methods NeurIPS 2015

Recurrent Neural Networks can be trained to produce sequences of tokens given some input, as exemplified by recent results in machine translation and image captioning.

Grammar as a Foreign Language

atpaino/deep-text-corrector NeurIPS 2015

Syntactic constituency parsing is a fundamental problem in natural language processing and has been the subject of intensive research and engineering for decades.

Recurrent Neural Network Grammars

clab/rnng NAACL 2016

We introduce recurrent neural network grammars, probabilistic models of sentences with explicit phrase structure.

Constituency Parsing with a Self-Attentive Encoder

nikitakit/self-attentive-parser ACL 2018

We demonstrate that replacing an LSTM encoder with a self-attentive architecture can lead to improvements to a state-of-the-art discriminative constituency parser.

Multilingual Constituency Parsing with Self-Attention and Pre-Training

nikitakit/self-attentive-parser ACL 2019

We show that constituency parsing benefits from unsupervised pre-training across a variety of languages and a range of pre-training conditions.

Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Autoencoders

iesl/diora 3 Apr 2019

We introduce deep inside-outside recursive autoencoders (DIORA), a fully-unsupervised method for discovering syntax that simultaneously learns representations for constituents within the induced tree.

Generalizing Natural Language Analysis through Span-relation Representations

jzbjyb/SpanRel ACL 2020

Natural language processing covers a wide variety of tasks predicting syntax, semantics, and information content, and usually each type of output is generated with specially designed architectures.

YellowFin and the Art of Momentum Tuning

JianGoForIt/YellowFin ICLR 2018

We revisit the momentum SGD algorithm and show that hand-tuning a single learning rate and momentum makes it competitive with Adam.