Weighted decoding methods that compose a pretrained language model (LM) with a controller have achieved promising results for controllable text generation.
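As a rough illustration of the general weighted-decoding idea (not any specific paper's method), the frozen LM's next-token distribution can be mixed at each step with a controller's per-token attribute scores. The function name, the `weight` hyperparameter, and the assumption that controller scores are already available per candidate token are all illustrative:

```python
import torch

def weighted_decoding_step(lm_logits, ctrl_logp, weight=1.0):
    # lm_logits: [vocab_size] next-token logits from the frozen base LM
    # ctrl_logp: [vocab_size] controller log-scores indicating how well each
    #            candidate next token satisfies the target attribute
    #            (how these are computed varies across methods)
    # weight:    hypothetical hyperparameter controlling controller strength
    combined = torch.log_softmax(lm_logits, dim=-1) + weight * ctrl_logp
    return torch.argmax(combined)  # greedy pick; sampling also works

# Tiny demo with random scores over an 8-token vocabulary
next_token = weighted_decoding_step(
    torch.randn(8), torch.randn(8).log_softmax(-1), weight=2.0
)
```

A larger `weight` trades fluency (the LM term) for stronger attribute control (the controller term).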
Although neural table-to-text models have achieved remarkable progress with the help of large-scale datasets, they suffer from insufficient learning when training data is limited.
Neural table-to-text models, which select and order salient data and verbalize them fluently via surface realization, have achieved promising progress.
In this paper, we focus on a new practical task, document-scale text content manipulation, which is the opposite of text style transfer: it aims to preserve text style while altering the content.
To address the aforementioned problems, we not only model each table cell in the context of other records in the same row, but also enrich the table's representation by modeling each cell in the context of other cells in the same column and of historical (time-dimension) data, respectively.
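A minimal sketch of how such row, column, and historical context might be fused into each cell representation; the mean-pooling fusion, module names, and dimensions below are assumptions for illustration, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class CellEncoder(nn.Module):
    """Enrich each cell embedding with row, column, and historical
    (previous time step) context. All design choices are illustrative."""

    def __init__(self, dim):
        super().__init__()
        # Fuse cell + row context + column context + history back to `dim`
        self.proj = nn.Linear(4 * dim, dim)

    def forward(self, cells, history):
        # cells:   [rows, cols, dim] embeddings of the current table
        # history: [rows, cols, dim] the same cells at earlier time steps
        row_ctx = cells.mean(dim=1, keepdim=True).expand_as(cells)
        col_ctx = cells.mean(dim=0, keepdim=True).expand_as(cells)
        fused = torch.cat([cells, row_ctx, col_ctx, history], dim=-1)
        return torch.tanh(self.proj(fused))

# Demo: a 4x3 table with 16-dimensional cell embeddings
enc = CellEncoder(dim=16)
out = enc(torch.randn(4, 3, 16), torch.randn(4, 3, 16))  # -> [4, 3, 16]
```

An attention-based fusion over each dimension would be a natural alternative to the mean pooling used here.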
This paper describes the primary system submitted by the author to the E2E NLG Challenge on the E2E dataset (Novikova et al., 2017).