Variational Template Machine for Data-to-Text Generation

ICLR 2020  ·  Rong Ye, Wenxian Shi, Hao Zhou, Zhongyu Wei, Lei Li

How can we generate descriptions from structured data organized in tables? Existing approaches using neural encoder-decoder models often suffer from a lack of diversity. We claim that an open set of templates is crucial for enriching phrase constructions and realizing varied generations. Learning such templates is prohibitive, since it often requires a large paired <table, description> corpus, which is seldom available. This paper explores the problem of automatically learning reusable "templates" from paired and non-paired data. We propose the variational template machine (VTM), a novel method to generate text descriptions from data tables. Our contributions are twofold: a) we carefully devise a model architecture and training losses that explicitly disentangle template and semantic content information in the latent space, and b) we utilize both a small parallel corpus and large amounts of raw text without aligned tables to enrich template learning. Experiments on datasets from a variety of domains show that VTM generates more diverse descriptions while maintaining good fluency and quality.
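The abstract describes a model that separates a template latent variable from the semantic content of the table. A minimal way to picture this is a conditional VAE in which a content vector is computed deterministically from the table fields while a template variable is sampled from a Gaussian, and the decoder conditions on both. The sketch below is an illustrative reconstruction under those assumptions, not the authors' released code: all module names, dimensions, and the single-GRU encoder/decoder design are hypothetical simplifications.

```python
import torch
import torch.nn as nn

class VariationalTemplateMachine(nn.Module):
    """Illustrative sketch (not the official VTM implementation):
    a table encoder yields a content vector c, a posterior network yields
    a template latent z, and the decoder conditions on [z; c]."""

    def __init__(self, vocab_size=10000, field_dim=128, z_dim=64, hid_dim=256):
        super().__init__()
        self.field_enc = nn.Linear(field_dim, hid_dim)     # table fields -> content c
        self.embed = nn.Embedding(vocab_size, hid_dim)
        self.text_enc = nn.GRU(hid_dim, hid_dim, batch_first=True)
        self.post_mu = nn.Linear(hid_dim, z_dim)           # q(z|y): posterior mean
        self.post_logvar = nn.Linear(hid_dim, z_dim)       # q(z|y): posterior log-variance
        self.init_h = nn.Linear(z_dim + hid_dim, hid_dim)  # fuse template z with content c
        self.decoder = nn.GRU(hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, fields, tokens):
        c = torch.tanh(self.field_enc(fields))             # content from the table (deterministic)
        _, h = self.text_enc(self.embed(tokens))           # encode target text for the posterior
        mu, logvar = self.post_mu(h[-1]), self.post_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        h0 = torch.tanh(self.init_h(torch.cat([z, c], dim=-1))).unsqueeze(0)
        dec_out, _ = self.decoder(self.embed(tokens), h0)  # teacher-forced decoding
        logits = self.out(dec_out)                         # next-token logits (reconstruction term)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
        return logits, kl                                  # the two ELBO terms
```

In this framing, diversity at generation time comes from sampling different template vectors z from the prior while holding the table content c fixed, and, per the abstract, raw text without aligned tables can still train the text-side encoder and decoder, which is how the non-parallel corpora contribute to template learning.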

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Table-to-Text Generation | Wikipedia Person and Animal Dataset | VTM | BLEU | 25.22 | #1 |
| Table-to-Text Generation | Wikipedia Person and Animal Dataset | VTM | ROUGE | 45.36 | #1 |
