Unsupervised Dependency Parsing: Let's Use Supervised Parsers

HLT 2015  ·  Phong Le, Willem Zuidema

We present a self-training approach to unsupervised dependency parsing that reuses existing supervised and unsupervised parsing algorithms. Our approach, called 'iterated reranking' (IR), starts with dependency trees generated by an unsupervised parser and iteratively improves these trees, using the richer probability models of supervised parsing, which are in turn trained on these trees. Our system achieves an accuracy 1.8% higher than the state-of-the-art parser of Spitkovsky et al. (2013) on the WSJ corpus.
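The iterated-reranking loop described above can be sketched as follows. This is a toy illustration, not the authors' implementation: the initial parser, the trained probability model, and the candidate generator are all hypothetical stand-ins (simple attachment heuristics and smoothed bigram counts), and trees are encoded as lists of head indices with -1 marking the root. Only the overall loop structure — initialize trees with an unsupervised parser, train a richer model on them, rerank candidate trees, repeat — mirrors the approach in the abstract.

```python
import random
from collections import Counter

def initial_trees(corpus):
    # Stand-in for an unsupervised parser (Spitkovsky et al.'s system in the
    # paper): attach each word to its left neighbour; word 0 is the root.
    return [[i - 1 for i in range(len(sent))] for sent in corpus]

def train(corpus, trees):
    # Stand-in for the supervised probability model: count head-word ->
    # dependent-word attachment events observed in the current trees.
    counts = Counter()
    for sent, heads in zip(corpus, trees):
        for dep, head in enumerate(heads):
            h = sent[head] if head >= 0 else "<ROOT>"
            counts[(h, sent[dep])] += 1
    return counts

def score(model, sent, heads):
    # Score a tree by its (add-one smoothed) attachment counts.
    return sum(
        model[(sent[h] if h >= 0 else "<ROOT>", sent[d])] + 1
        for d, h in enumerate(heads))

def candidates(sent, k, rng):
    # Stand-in for a k-best parser: sample k random trees (well-formedness
    # such as acyclicity is ignored in this sketch).
    return [[-1] + [rng.choice([j for j in range(len(sent)) if j != i])
                    for i in range(1, len(sent))]
            for _ in range(k)]

def iterated_rerank(corpus, rounds=3, k=8, seed=0):
    rng = random.Random(seed)
    trees = initial_trees(corpus)           # step 1: unsupervised init
    for _ in range(rounds):
        model = train(corpus, trees)        # step 2: train richer model
        trees = [max(candidates(sent, k, rng) + [t],
                     key=lambda h: score(model, sent, h))
                 for sent, t in zip(corpus, trees)]  # step 3: rerank
    return trees

corpus = [["the", "dog", "barks"], ["the", "cat", "sleeps"]]
print(iterated_rerank(corpus))
```

The key design point, per the abstract, is that the supervised model is trained only on trees produced by the previous iteration, so no gold annotations are ever required.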


Datasets


Task                             Dataset        Model               Metric  Value  Global Rank
Unsupervised Dependency Parsing  Penn Treebank  Iterated reranking  UAS     66.2   # 1

Methods


No methods listed for this paper.