HIT-SCIR at MRP 2019: A Unified Pipeline for Meaning Representation Parsing via Efficient Training and Effective Encoding

This paper describes our system (HIT-SCIR) for the CoNLL 2019 shared task: Cross-Framework Meaning Representation Parsing. We extended a basic transition-based parser with two improvements: (a) efficient training, by parallelizing Stack LSTM training; and (b) effective encoding, by adopting deep contextualized word embeddings from BERT. Overall, we propose a unified pipeline for meaning representation parsing, comprising framework-specific transition-based parsers, BERT-enhanced word representations, and post-processing. In the final evaluation, our system ranked first by ALL-F1 (86.2%) and, in particular, ranked first on the UCCA framework (81.67%).
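To make the "framework-specific transition-based parser" component concrete, below is a minimal, illustrative sketch of a transition loop that builds a labeled graph from a sequence of actions. The transition names (`SHIFT`, `REDUCE`, `EDGE`) and the oracle-free interface are simplifying assumptions for exposition; the actual system uses framework-specific transition sets, Stack LSTM features, and BERT embeddings to score actions.

```python
# Illustrative sketch of a transition-based parsing loop (not the
# HIT-SCIR implementation; transition names are hypothetical).

def parse(tokens, actions):
    """Apply a sequence of transitions to build a graph.

    actions: list of ("SHIFT",), ("REDUCE",), or ("EDGE", label) tuples.
    "SHIFT" moves the next buffer token onto the stack, "REDUCE" pops
    the stack top, and "EDGE" adds a labeled edge from the second-top
    stack item to the top. Returns edges as (head, dependent, label).
    """
    stack, buffer = [], list(range(len(tokens)))
    edges = set()
    for action in actions:
        if action[0] == "SHIFT":
            stack.append(buffer.pop(0))
        elif action[0] == "REDUCE":
            stack.pop()
        elif action[0] == "EDGE":
            head, dep = stack[-2], stack[-1]
            edges.add((head, dep, action[1]))
    return edges
```

In the full system, the action at each step is predicted by a classifier over the parser state (stack, buffer, and action history) rather than supplied as input; the graph-building mechanics are the same.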


Results from the Paper


Task: UCCA Parsing · Dataset: CoNLL 2019 · Model: Transition-based (+BERT + Efficient Training + Effective Encoding)

Metric       | Value | Global Rank
Full UCCA F1 | 66.7  | #1
Full MRP F1  | 81.7  | #1
LPP UCCA F1  | 64.4  | #2
LPP MRP F1   | 82.6  | #1
