25 papers with code • 8 benchmarks • 6 datasets
Table-to-Text Generation is the task of generating a natural-language description from a structured table.
This paper introduces a neural model for concept-to-text generation that scales to large, rich domains.
In the decoding phase, a dual attention mechanism, combining word-level attention and field-level attention, is proposed to model the semantic relevance between the generated description and the table.
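The dual attention idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the fusion by element-wise product and renormalization is one common variant, and all names and shapes here are assumptions.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def dual_attention(query, word_embs, field_embs):
    """Fuse word-level attention (over cell contents) with
    field-level attention (over field names) by multiplying the
    two distributions and renormalizing (an assumed fusion rule)."""
    word_scores = softmax(word_embs @ query)    # attention over cell words
    field_scores = softmax(field_embs @ query)  # attention over field names
    combined = word_scores * field_scores       # element-wise fusion
    return combined / combined.sum()

# toy example: 4 table cells, hidden size 3
rng = np.random.default_rng(0)
q = rng.normal(size=3)
words = rng.normal(size=(4, 3))
fields = rng.normal(size=(4, 3))
attn = dual_attention(q, words, fields)  # distribution over the 4 cells
```

A cell receives high weight only when both its words and its field name are relevant to the decoder state, which is the intuition behind combining the two levels.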
Generating texts from structured data (e.g., a table) is important for various natural language processing tasks such as question answering and dialog systems.
Automatically constructed datasets for generating text from semi-structured data (tables), such as WikiBio, often contain reference texts that diverge from the information in the corresponding semi-structured data.
We propose a novel model to separate the generation into two stages: key fact prediction and surface realization.
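The two-stage pipeline can be illustrated with a toy stand-in: in the paper both stages are learned models, whereas here key fact prediction is a trivial filter and surface realization a template, purely to show the decomposition. All names are hypothetical.

```python
def predict_key_facts(table):
    """Stage 1 (toy stand-in): select which cells to mention.
    Here we simply keep non-empty cells; the actual model learns this."""
    return [(field, value) for field, value in table.items() if value]

def surface_realization(facts):
    """Stage 2 (toy stand-in): render the selected facts as text
    via a template; the actual model uses a neural generator."""
    return ", ".join(f"{field} is {value}" for field, value in facts)

table = {"name": "Ada Lovelace", "occupation": "mathematician", "spouse": ""}
facts = predict_key_facts(table)
text = surface_realization(facts)
# → "name is Ada Lovelace, occupation is mathematician"
```

Separating content selection from realization lets each stage be trained and evaluated on its own, which is the motivation for the split.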
Table-to-Text Generation with Effective Hierarchical Encoder on Three Dimensions (Row, Column and Time)
To address the aforementioned problems, we not only model each table cell by considering other records in the same row, but also enrich the table's representation by modeling each cell in the context of other cells in the same column or of historical (time-dimension) data.
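A simplified sketch of enriching cell representations along the three dimensions: here each context is a plain mean, standing in for the paper's attention-based encoders, and the tensor layout is an assumption.

```python
import numpy as np

def hierarchical_cell_encoding(tables):
    """tables: array of shape (T, R, C, H) — T time steps, R rows,
    C columns, H-dim cell embeddings. Returns enriched cells for the
    latest table by mixing in row, column, and time contexts
    (means used here as a simplified stand-in for attention)."""
    latest = tables[-1]                            # (R, C, H) most recent table
    row_ctx = latest.mean(axis=1, keepdims=True)   # (R, 1, H) same-row context
    col_ctx = latest.mean(axis=0, keepdims=True)   # (1, C, H) same-column context
    time_ctx = tables.mean(axis=0)                 # (R, C, H) historical context
    # broadcasting expands the row/column contexts to every cell
    return (latest + row_ctx + col_ctx + time_ctx) / 4.0

# toy example: 2 time steps, 2 rows, 3 columns, hidden size 4
tables = np.arange(2 * 2 * 3 * 4, dtype=float).reshape(2, 2, 3, 4)
enc = hierarchical_cell_encoding(tables)  # shape (2, 3, 4)
```

Each output cell thus carries information from its row, its column, and the same cell at earlier time steps, which is the three-dimension idea in a nutshell.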
Two Birds, One Stone: A Simple, Unified Model for Text Generation from Structured and Unstructured Data
We consider neural table-to-text generation and neural question generation (NQG) tasks for text generation from structured and unstructured data, respectively.