Data-to-text generation is the task of producing natural-language text from structured data such as tables, database records, or knowledge graphs.
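To make the task concrete, here is a minimal illustration: a structured record is rendered as an English sentence. This is a hypothetical template-based baseline for exposition only, not the method of any paper listed below; the field names and the record are invented.

```python
def record_to_text(record):
    """Render a structured restaurant record as one English sentence.

    A deliberately simple template stands in for the learned models
    discussed below; the attribute names are illustrative.
    """
    return (f"{record['name']} is a {record['food']} restaurant "
            f"in the {record['area']} with a {record['rating']} rating.")

record = {"name": "The Eagle", "food": "French",
          "area": "city centre", "rating": "high"}
print(record_to_text(record))
# -> The Eagle is a French restaurant in the city centre with a high rating.
```

Neural approaches replace the fixed template with a model trained end-to-end on (record, text) pairs, which is what the papers summarized below study.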
This paper summarises the experimental setup and results of the first shared task on end-to-end (E2E) natural language generation (NLG) in spoken dialogue systems.
#3 best model for Data-to-Text Generation on E2E NLG Challenge
This paper describes the E2E dataset, a new dataset for training end-to-end, data-driven natural language generation systems in the restaurant domain, which is ten times larger than existing, frequently used datasets in this area.
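Inputs in the E2E data are flat meaning representations (MRs) written as comma-separated `attribute[value]` slots. The sketch below parses one such MR into a dictionary; the exact MR string is an invented example in that format, and the parsing code is an assumption for illustration, not tooling shipped with the dataset.

```python
import re

def parse_mr(mr):
    """Parse an E2E-style meaning representation, e.g.
    "name[The Eagle], eatType[coffee shop]", into an attribute dict."""
    return dict(re.findall(r"(\w+)\[([^\]]*)\]", mr))

mr = "name[The Eagle], eatType[coffee shop], priceRange[moderate]"
print(parse_mr(mr))
# -> {'name': 'The Eagle', 'eatType': 'coffee shop', 'priceRange': 'moderate'}
```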
Recent neural models have shown significant progress on the problem of generating short descriptive texts conditioned on a small number of database records.
#2 best model for Data-to-Text Generation on RotoWire (Relation Generation)
Most previous work on neural text generation from graph-structured data relies on standard sequence-to-sequence methods.
SOTA for Data-to-Text Generation on WebNLG
Recent advances in data-to-text generation have led to the use of large-scale datasets and neural network models which are trained end-to-end, without explicitly modeling what to say and in what order.
We propose to split the generation process into a symbolic text-planning stage that is faithful to the input, followed by a neural generation stage that focuses only on realization.
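The two-stage split described above can be sketched as follows. This is a toy simplification under stated assumptions: the symbolic planner here just selects and orders facts with a fixed plan, and a template stands in for the neural realizer; the attribute names and example record are invented.

```python
def plan(facts):
    """Symbolic text-planning stage: decide which facts to mention
    and in what order, staying faithful to the input."""
    order = ["name", "food", "area"]  # a fixed, input-faithful plan
    return [(key, facts[key]) for key in order if key in facts]

def realize(planned):
    """Realization stage: turn the plan into a fluent sentence.
    (A template here; a neural generator in the paper.)"""
    d = dict(planned)
    parts = [d["name"]]
    if "food" in d:
        parts.append(f"serves {d['food']} food")
    if "area" in d:
        parts.append(f"in the {d['area']}")
    return " ".join(parts) + "."

facts = {"area": "riverside", "name": "Aromi", "food": "Italian"}
print(realize(plan(facts)))
# -> Aromi serves Italian food in the riverside.
```

Because the planner is symbolic, every mentioned fact comes directly from the input, and the realizer only has to worry about fluency.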
#2 best model for Data-to-Text Generation on WebNLG
Recent approaches to data-to-text generation have shown great promise thanks to the use of large-scale datasets and the application of neural network architectures which are trained end-to-end.
We aim to automatically generate natural language descriptions of an input structured knowledge base (KB).
We improve the informativeness of models for conditional text generation using techniques from computational pragmatics.