Structured Training for Neural Network Transition-Based Parsing

We present structured perceptron training for neural network transition-based dependency parsing. We learn the neural network representation using a gold corpus augmented by a large number of automatically parsed sentences. Given this fixed network representation, we learn a final layer using the structured perceptron with beam-search decoding. On the Penn Treebank, our parser reaches 94.26% unlabeled and 92.41% labeled attachment accuracy, which to our knowledge is the best accuracy on Stanford Dependencies to date. We also provide in-depth ablative analysis to determine which aspects of our model provide the largest gains in accuracy.
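
The abstract compresses the training recipe into two stages: learn a neural representation, then freeze it and train only a final layer with the structured perceptron under beam-search decoding. The sketch below illustrates that second stage under toy assumptions, not the authors' implementation: `activations` fakes the fixed hidden activations of a pretrained network with deterministic pseudo-random vectors, the "transition system" is just three unconstrained actions, and the update is a plain full-sequence perceptron update rather than a beam-aware variant such as early update.

```python
import numpy as np

N_ACTIONS, DIM, BEAM = 3, 16, 4

def activations(sent_id, step, history):
    """Stand-in for the FIXED hidden activations of a pretrained network:
    a deterministic pseudo-random vector per (sentence, step, history)."""
    seed = hash((sent_id, step, tuple(history))) % (2**32)
    return np.random.default_rng(seed).standard_normal(DIM)

def beam_search(sent_id, weights, n_steps, beam_size=BEAM):
    """Inexact decoding: keep the beam_size best partial action sequences."""
    # Each hypothesis: (score, action history, summed per-action features).
    beam = [(0.0, [], np.zeros((N_ACTIONS, DIM)))]
    for step in range(n_steps):
        cands = []
        for score, hist, trace in beam:
            phi = activations(sent_id, step, hist)
            for a in range(N_ACTIONS):
                t = trace.copy()
                t[a] += phi  # credit the features to the action taken
                cands.append((score + float(weights[a] @ phi), hist + [a], t))
        cands.sort(key=lambda c: c[0], reverse=True)
        beam = cands[:beam_size]
    return beam[0]

def feature_trace(sent_id, actions):
    """Summed per-action features of a given sequence (used for the gold path)."""
    trace, hist = np.zeros((N_ACTIONS, DIM)), []
    for step, a in enumerate(actions):
        trace[a] += activations(sent_id, step, hist)
        hist.append(a)
    return trace

def train(data, epochs=10):
    """Structured perceptron over the fixed representation: only the final
    linear layer is learned, as in the paper's last training stage."""
    weights = np.zeros((N_ACTIONS, DIM))
    for _ in range(epochs):
        for sent_id, gold in data:
            _, pred, pred_trace = beam_search(sent_id, weights, len(gold))
            if pred != gold:  # full-sequence update; early update omitted
                weights += feature_trace(sent_id, gold) - pred_trace
    return weights

# Toy usage: two "sentences" with gold action sequences.
data = [(0, [0, 1, 2, 1, 0, 2]), (1, [2, 0, 1, 0, 2, 1])]
w = train(data)
print(beam_search(0, w, 6)[1])  # ideally recovers [0, 1, 2, 1, 0, 2]
```

The key design point mirrored here is that the perceptron updates touch only `weights`; the representation (`activations`) never changes, which is what lets the paper combine a network trained on gold-plus-auto-parsed data with globally normalized structured training.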


Datasets

Penn Treebank

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Dependency Parsing | Penn Treebank | Weiss et al. | POS | 97.3 | # 5 |
| Dependency Parsing | Penn Treebank | Weiss et al. | UAS | 94.01 | # 19 |
| Dependency Parsing | Penn Treebank | Weiss et al. | LAS | 92.06 | # 18 |
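
For readers unfamiliar with the metrics above, the sketch below computes unlabeled and labeled attachment scores using their standard definitions. It is not tied to this paper's exact evaluation script; PTB evaluations on Stanford Dependencies conventionally exclude punctuation, which this toy version does not.

```python
def attachment_scores(gold, pred):
    """gold/pred: one (head_index, dependency_label) pair per token.
    UAS counts correct heads; LAS counts correct heads AND labels."""
    assert len(gold) == len(pred)
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / len(gold)
    las = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    return uas, las

# Example: 3 of 4 heads correct (UAS 75%), 2 of 4 head+label correct (LAS 50%).
gold = [(2, "nsubj"), (0, "root"), (2, "dobj"), (3, "amod")]
pred = [(2, "nsubj"), (0, "punct"), (2, "dobj"), (2, "amod")]
print(attachment_scores(gold, pred))  # (0.75, 0.5)
```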

Methods


No methods listed for this paper.