Improving Neural Parsing by Disentangling Model Combination and Reranking Effects

ACL 2017  ·  Daniel Fried, Mitchell Stern, Dan Klein

Recent work has proposed several generative neural models for constituency parsing that achieve state-of-the-art results. Since direct search in these generative models is difficult, they have primarily been used to rescore candidate outputs from base parsers in which decoding is more straightforward. We first present an algorithm for direct search in these generative models. We then demonstrate that the rescoring results are at least partly due to implicit model combination rather than reranking effects. Finally, we show that explicit model combination can improve performance even further, resulting in new state-of-the-art numbers on the PTB of 94.25 F1 when training only on gold data and 94.66 F1 when using external data.
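The distinction the abstract draws can be made concrete: pure reranking selects the candidate parse the generative model alone scores highest, while explicit model combination interpolates the base parser's score with the generative model's score before selecting. The sketch below illustrates that contrast on toy data; the candidate names, log-probabilities, and interpolation weight are illustrative stand-ins, not values from the paper.

```python
def rerank(candidates, generative_score):
    """Pure reranking: pick the candidate the generative model scores highest."""
    return max(candidates, key=generative_score)

def combine(candidates, base_score, generative_score, lam=0.5):
    """Explicit model combination: interpolate base-parser and generative
    log-probabilities, then pick the best candidate under the mixed score."""
    return max(
        candidates,
        key=lambda c: lam * base_score(c) + (1 - lam) * generative_score(c),
    )

# Toy example: made-up log-probabilities for three candidate parses.
candidates = ["parse_A", "parse_B", "parse_C"]
base = {"parse_A": -0.5, "parse_B": -2.0, "parse_C": -3.0}  # base parser
gen = {"parse_A": -2.0, "parse_B": -1.0, "parse_C": -2.5}   # generative model

best_rerank = rerank(candidates, gen.get)          # generative model alone
best_combined = combine(candidates, base.get, gen.get)  # interpolated scores
```

With these toy scores the two criteria disagree (`best_rerank` is `parse_B`, `best_combined` is `parse_A`), which is the kind of effect the paper disentangles: gains attributed to reranking may in fact come from implicitly mixing the two models' scores.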


Datasets


Task: Constituency Parsing
Dataset: Penn Treebank
Model: Model combination
Metric: F1 score — 94.66 (Global Rank #14)
