An Empirical Study of Building a Strong Baseline for Constituency Parsing

ACL 2018 · Jun Suzuki, Sho Takase, Hidetaka Kamigaito, Makoto Morishita, Masaaki Nagata

This paper investigates the construction of a strong baseline based on general purpose sequence-to-sequence models for constituency parsing. We incorporate several techniques that were mainly developed in natural language generation tasks, e.g., machine translation and summarization, and demonstrate that the sequence-to-sequence model achieves the current top-notch parsers' performance (almost) without requiring any explicit task-specific knowledge or architecture of constituent parsing...
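The abstract frames constituency parsing as a plain sequence-to-sequence problem: the encoder reads the sentence and the decoder emits a linearized parse tree. As a rough illustration of that framing only (the paper's exact preprocessing is not specified here), the sketch below linearizes a Penn Treebank-style tree into bracket tokens in the common "grammar as a foreign language" style; the `linearize` helper and the `XX` word placeholder are illustrative assumptions, not the authors' code.

```python
# Minimal sketch: turning a constituency tree into the kind of token
# sequence a general-purpose seq2seq parser is trained to predict.
# Assumption: this bracketed linearization with an XX placeholder for
# words follows common practice and may differ from the paper's setup.

from nltk import Tree


def linearize(tree: Tree) -> list[str]:
    """Flatten a parse tree into labeled bracket tokens, replacing each
    terminal word with a generic XX so the decoder predicts structure
    rather than copying the input words."""
    if tree.height() == 2:  # preterminal node, e.g. (DT The)
        return ["(" + tree.label(), "XX", ")" + tree.label()]
    tokens = ["(" + tree.label()]
    for child in tree:
        tokens.extend(linearize(child))
    tokens.append(")" + tree.label())
    return tokens


if __name__ == "__main__":
    t = Tree.fromstring("(S (NP (DT The) (NN cat)) (VP (VBZ sleeps)))")
    print(" ".join(linearize(t)))
    # (S (NP (DT XX )DT (NN XX )NN )NP (VP (VBZ XX )VBZ )VP )S
```

With targets of this form, the parser itself can stay a generic LSTM encoder-decoder; the gains reported below come from generation-side techniques (and an LSTM language model) rather than from a parsing-specific architecture.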


Results from the Paper


TASK                   DATASET         MODEL                            METRIC     VALUE   GLOBAL RANK
Constituency Parsing   Penn Treebank   LSTM Encoder-Decoder + LSTM-LM   F1 score   94.32   #7
