We follow the step-by-step approach to neural data-to-text generation we proposed in Moryossef et al. (2019), in which the generation process is divided into a text-planning stage followed by a plan-realization stage.
The Variational Autoencoder (VAE) is a powerful method for learning representations of high-dimensional data.
Keyphrase generation is the task of predicting a set of lexical units that conveys the main content of a source text.
We investigate the impact of search strategies in neural dialogue modeling.
We apply this framework to existing datasets and models and show that: (1) the pivot words are strong features for the classification of sentence attributes; (2) to change the attribute of a sentence, many datasets only require changing certain pivot words; (3) consequently, many transfer models perform only lexical-level modifications, while leaving higher-level sentence structures unchanged.
We present a recurrent neural network based system for automatic quality estimation of natural language generation (NLG) outputs, which jointly learns to assign numerical ratings to individual outputs and to provide pairwise rankings of two different outputs.
Data-to-text generation models face challenges in ensuring data fidelity by referring to the correct input source.
We present a creative poem generator for the morphologically rich Finnish language.
Deep neural networks (DNNs) are quickly becoming the de facto standard modeling method for many natural language generation (NLG) tasks.
The writing process consists of several stages such as drafting, revising, editing, and proofreading.