PCFGs Can Do Better: Inducing Probabilistic Context-Free Grammars with Many Symbols

NAACL 2021 · Songlin Yang, Yanpeng Zhao, Kewei Tu

Probabilistic context-free grammars (PCFGs) with neural parameterization have been shown to be effective in unsupervised phrase-structure grammar induction. However, due to the cubic computational complexity of PCFG representation and parsing, previous approaches cannot scale up to a relatively large number of (nonterminal and preterminal) symbols. In this work, we present a new parameterization form of PCFGs based on tensor decomposition, which has at most quadratic computational complexity in the symbol number and therefore allows us to use a much larger number of symbols. We further use neural parameterization for the new form to improve unsupervised parsing performance. We evaluate our model across ten languages and empirically demonstrate the effectiveness of using more symbols. Our code: https://github.com/sustcsonglin/TN-PCFG
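The complexity claim in the abstract can be made concrete with a small sketch. In a standard PCFG, the binary-rule probabilities form a tensor indexed by (parent, left child, right child), so each inside-algorithm step contracts against an N x N x N tensor; writing that tensor in CP (Kruskal) form lets the same contraction be computed one rank component at a time. The snippet below is a minimal illustration of that single contraction only, with made-up sizes and random factors; it is not the authors' neural parameterization (see the linked repository for that).

    import numpy as np

    # Sizes are illustrative only: N symbols, rank-R decomposition.
    N, R = 500, 250
    rng = np.random.default_rng(0)

    # Hypothetical CP (Kruskal) factors of the binary-rule tensor
    # T[A, B, C] = sum_r U[A, r] * V[B, r] * W[C, r],
    # where T[A, B, C] stands for the probability of the rule A -> B C.
    U = rng.random((N, R))   # parent-symbol factor
    V = rng.random((N, R))   # left-child factor
    W = rng.random((N, R))   # right-child factor

    # Inside scores of a left and a right sub-span, one entry per symbol.
    s_left = rng.random(N)
    s_right = rng.random(N)

    # Naive inside step against the full tensor, O(N^3) time and memory:
    # T = np.einsum('ar,br,cr->abc', U, V, W)
    # s_parent_naive = np.einsum('abc,b,c->a', T, s_left, s_right)

    # Decomposed inside step: project each child into the R-dimensional
    # rank space, combine elementwise, and map back to the parent symbols.
    # This costs O(N * R) instead of O(N^3) per split point.
    left_rank = V.T @ s_left                   # shape (R,)
    right_rank = W.T @ s_right                 # shape (R,)
    s_parent = U @ (left_rank * right_rank)    # shape (N,)

With N = 500 symbols the full tensor would already hold 1.25 x 10^8 entries, which is why the naive step above is left as a comment and why the decomposed form is what makes large symbol counts practical.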

Results

Task                             Dataset               Model            Metric          Value   Global Rank
Constituency Grammar Induction   Penn Treebank (PTB)   TN-PCFG (p=500)  Max F1 (WSJ)    61.4    #4
Constituency Grammar Induction   Penn Treebank (PTB)   TN-PCFG (p=500)  Mean F1 (WSJ)   57.7    #7
