Neural data-to-text generation: A comparison between pipeline and end-to-end architectures

IJCNLP 2019 · Thiago Castro Ferreira, Chris van der Lee, Emiel van Miltenburg, Emiel Krahmer

Traditionally, most data-to-text applications have been designed using a modular pipeline architecture, in which non-linguistic input data is converted into natural language through several intermediate transformations. In contrast, recent neural models for data-to-text generation have been proposed as end-to-end approaches, where the non-linguistic input is rendered in natural language with far fewer explicit intermediate representations in between. This study introduces a systematic comparison between neural pipeline and end-to-end data-to-text approaches for the generation of text from RDF triples. Both architectures were implemented using state-of-the-art deep learning methods, namely encoder-decoder Gated Recurrent Units (GRU) and the Transformer. Automatic and human evaluations, together with a qualitative analysis, suggest that having explicit intermediate steps in the generation process results in better texts than those generated by end-to-end approaches. Moreover, the pipeline models generalize better to unseen inputs. Data and code are publicly available.
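To make the architectural contrast concrete, below is a minimal, hypothetical Python sketch (not the authors' released code) of the two ways a set of RDF triples can be turned into text. The stage names in the pipeline variant follow the paper's description of the intermediate steps; every implementation detail here (function names, the `model`/`modules` interfaces) is an illustrative assumption.

```python
# Illustrative sketch only: the pipeline stage names follow the paper,
# but each callable below stands in for a trained neural module
# (GRU- or Transformer-based in the actual study).

Triple = tuple[str, str, str]  # (subject, predicate, object)


def linearize(triples: list[Triple]) -> str:
    """Flatten RDF triples into the token sequence a seq2seq model reads."""
    return " ".join(f"<S> {s} <P> {p} <O> {o}" for s, p, o in triples)


def end_to_end(triples: list[Triple], model) -> str:
    """End-to-end: a single neural model maps linearized triples to text."""
    return model.generate(linearize(triples))


def pipeline(triples: list[Triple], modules: dict) -> str:
    """Pipeline: explicit intermediate representations between neural steps."""
    ordered = modules["ordering"](triples)        # discourse ordering
    plan = modules["structuring"](ordered)        # split into sentence plans
    templates = modules["lexicalization"](plan)   # words with entity slots
    with_refs = modules["reg"](templates)         # referring expressions
    return modules["realization"](with_refs)      # final surface text
```

The pipeline variant exposes each intermediate representation, which is what the study credits for the better text quality and the stronger generalization to unseen inputs.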


Datasets

WebNLG

Results

| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Data-to-Text Generation | WebNLG | E2E GRU | BLEU | 57.20 | #13 |
| Data-to-Text Generation | WebNLG | Full Transformer (Pipeline) | BLEU | 51.68 | #8 |
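As a side note, BLEU scores of the kind reported above can be computed with the sacrebleu library; the sketch below uses sacrebleu's real `corpus_bleu` API, but the file paths are placeholders, not paths from the paper's repository.

```python
# Hypothetical scoring snippet (pip install sacrebleu); file paths are
# placeholders. WebNLG provides multiple references per input; pass one
# reference stream per reference set to corpus_bleu.
import sacrebleu

with open("system_outputs.txt", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("references.txt", encoding="utf-8") as f:
    references = [line.strip() for line in f]

# corpus_bleu takes the hypotheses and a list of reference streams
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.2f}")
```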

Methods

GRU · Transformer