ReGen: Reinforcement Learning for Text and Knowledge Base Generation using Pretrained Language Models

EMNLP 2021 · Pierre L. Dognin, Inkit Padhi, Igor Melnyk, Payel Das

Automatic construction of relevant Knowledge Bases (KBs) from text, and generation of semantically meaningful text from KBs, are both long-standing goals in Machine Learning. In this paper, we present ReGen, a bidirectional approach to text and graph generation that leverages Reinforcement Learning (RL) to improve performance. Graph linearization enables us to reframe both tasks as a sequence-to-sequence generation problem regardless of the generative direction, which in turn allows the use of Reinforcement Learning for sequence training, where the model is employed as its own critic, leading to Self-Critical Sequence Training (SCST). We present an extensive investigation demonstrating that the use of RL via SCST benefits graph and text generation on the WebNLG+ 2020 and TekGen datasets. Our system achieves state-of-the-art results on WebNLG+ 2020, significantly improving upon published results from the WebNLG+ 2020 Challenge for both text-to-graph and graph-to-text generation tasks.
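To make the two generative directions symmetric, a KB graph can be serialized into a single token sequence. Below is a minimal sketch of such a linearization, assuming illustrative <S>/<P>/<O> marker tokens; ReGen's actual special tokens and triple ordering may differ.

```python
# A minimal sketch of graph linearization: serializing (subject, predicate,
# object) triples into one token string so that graph generation reduces to
# ordinary sequence generation. The <S>/<P>/<O> markers are illustrative
# placeholders, not necessarily the paper's special tokens.
def linearize_graph(triples):
    return " ".join(f"<S> {s} <P> {p} <O> {o}" for s, p, o in triples)

print(linearize_graph([("ReGen", "evaluated_on", "WebNLG+ 2020"),
                       ("ReGen", "uses", "SCST")]))
# -> <S> ReGen <P> evaluated_on <O> WebNLG+ 2020 <S> ReGen <P> uses <O> SCST
```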

Task | Dataset | Model | Metric | Value | Global Rank
Joint Entity and Relation Extraction | TekGen | ReGen-SCST | F1 | 62.3 | #1
Joint Entity and Relation Extraction | TekGen | ReGen-CE | F1 | 61.9 | #2
Joint Entity and Relation Extraction | WebNLG 3.0 | ReGen (Ours) T2G.RL | F1 | 72.0 | #4
Joint Entity and Relation Extraction | WebNLG 3.0 | ReGen (Ours) T2G.CE | F1 | 72.3 | #1
Joint Entity and Relation Extraction | WebNLG 3.0 | bt5 (agarwal-etal-2020-machine) | F1 | 68.2 | #7
Joint Entity and Relation Extraction | WebNLG 3.0 | Amazon AI (Shanghai) (guo-etal-2020-2) | F1 | 68.9 | #5

Methods


REINFORCE • SCST
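For reference, here is a minimal sketch of the self-critical policy-gradient loss underlying SCST, written in PyTorch. The function names, tensor shapes, and reward source are assumptions for illustration, not the paper's implementation; in ReGen the reward would come from a sequence-level evaluation of the generated text or linearized graph.

```python
# Minimal SCST loss sketch (PyTorch). Assumes the caller supplies, per batch
# element, the summed log-probability of a sampled output sequence and the
# rewards of the sampled and greedily decoded sequences. All names here are
# illustrative, not ReGen's actual code.
import torch

def scst_loss(sampled_log_probs: torch.Tensor,
              sampled_reward: torch.Tensor,
              greedy_reward: torch.Tensor) -> torch.Tensor:
    """REINFORCE with a self-critical baseline.

    sampled_log_probs: (batch,) sum of token log-probs of sequences sampled
                       from the current policy (the generator).
    sampled_reward:    (batch,) sequence-level reward of the sampled sequences.
    greedy_reward:     (batch,) reward of the greedy (test-time) decodes; this
                       baseline is what makes the model its own critic.
    """
    advantage = (sampled_reward - greedy_reward).detach()
    # Ascent on expected reward == descent on -(advantage * log-prob).
    return -(advantage * sampled_log_probs).mean()

# Toy usage with random values:
if __name__ == "__main__":
    lp = torch.randn(4, requires_grad=True)
    loss = scst_loss(lp, torch.rand(4), torch.rand(4))
    loss.backward()  # rewards sequences that beat the greedy baseline
```

The greedy-decode baseline needs no learned value network: samples that score above the model's own greedy output get their log-probability pushed up, and those below get pushed down.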