Table-to-Text Generation is the task of generating a natural language description from a structured table.
Source: Key Fact as Pivot: A Two-Stage Model for Low Resource Table-to-Text Generation
We propose encoder-centric stepwise models for extractive summarization using structured transformers -- HiBERT and Extended Transformers.
We present ToTTo, an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description.
Ranked #2 on Data-to-Text Generation on ToTTo
Tasks: Conditional Text Generation, Data-to-Text Generation, Table-to-Text Generation
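To make the controlled generation setup concrete, here is a minimal sketch of a ToTTo-style instance. The field names and table layout are illustrative approximations, not the exact ToTTo schema: a table, a set of highlighted (row, column) cells, and the one-sentence target.

```python
# Illustrative ToTTo-style instance (field names are assumptions,
# not the actual dataset schema).
example = {
    "table": [
        ["Year", "Team", "Goals"],      # header row
        ["2014", "FC Example", "12"],
        ["2015", "FC Example", "18"],
    ],
    "highlighted_cells": [(2, 0), (2, 2)],  # (row, col) pairs to describe
    "target": "In 2015, the player scored 18 goals.",
}

def highlighted_values(instance):
    """Return the cell values the model is asked to verbalize."""
    return [instance["table"][r][c] for r, c in instance["highlighted_cells"]]
```

The generation task is then: produce `target` conditioned on the table and the values returned by `highlighted_values(example)`.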
In the decoding phase, a dual attention mechanism, comprising word-level attention and field-level attention, is proposed to model the semantic relevance between the generated description and the table.
Ranked #1 on Table-to-Text Generation on WikiBio
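The dual attention idea above can be sketched as follows. This is a hedged toy version, not the paper's exact formulation: word-level scores over cell-value encodings and field-level scores over field-name encodings are combined multiplicatively and renormalized before reading out a context vector.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dual_attention(dec_state, word_keys, field_keys, values):
    """Toy dual attention: combine word-level and field-level
    attention by elementwise product, renormalize, then read out
    the attended context vector. All encodings are assumptions."""
    word_scores = softmax(word_keys @ dec_state)    # attention over cell words
    field_scores = softmax(field_keys @ dec_state)  # attention over field names
    combined = word_scores * field_scores           # joint word/field relevance
    combined = combined / combined.sum()            # renormalize to a distribution
    return combined @ values                        # context vector for decoding
```

The multiplicative combination means a cell is attended to only when both its value and its field name are relevant to the current decoder state.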
This paper introduces a neural model for concept-to-text generation that scales to large, rich domains.
Ranked #3 on Table-to-Text Generation on WikiBio
Tasks: Concept-to-Text Generation, Language Modelling, Table-to-Text Generation
We aim to automatically generate natural language descriptions about an input structured knowledge base (KB).
We propose a novel model to separate the generation into two stages: key fact prediction and surface realization.
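The two-stage decomposition can be illustrated with a deliberately simplified pipeline. Here stage one is a fixed salience filter and stage two a trivial template; in the paper both stages are learned models, so every name below is a hypothetical stand-in.

```python
def predict_key_facts(table, keep):
    """Stage 1 (toy): select the (field, value) pairs deemed salient.
    In the actual model this selection is learned, not a fixed set."""
    return [(f, v) for f, v in table.items() if f in keep]

def realize(facts):
    """Stage 2 (toy): surface realization via a trivial template.
    The actual model generates fluent text with a neural decoder."""
    return "; ".join(f"{field} is {value}" for field, value in facts) + "."

table = {"name": "Ada Lovelace", "born": "1815", "occupation": "mathematician"}
facts = predict_key_facts(table, keep={"name", "occupation"})
description = realize(facts)
```

Separating the stages lets the content-selection step be supervised cheaply (e.g., with pseudo-labels), which is the motivation in the low-resource setting.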
To address the aforementioned problems, we not only model each table cell with respect to other records in the same row, but also enrich the table's representation by modeling each cell in the context of other cells in the same column and of historical (time-dimension) data.
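A minimal numeric sketch of this three-way context, assuming a purely illustrative featurization (the paper uses learned encoders, not means): each cell is represented together with its row context, its column context, and its value across historical tables.

```python
import numpy as np

def cell_context(table, r, c, history=None):
    """Toy representation of cell (r, c): its value concatenated with
    the mean of its row, the mean of its column, and the mean of the
    same cell across earlier tables (time dimension). Illustrative only."""
    row_ctx = table[r].mean()                         # same-row context
    col_ctx = table[:, c].mean()                      # same-column context
    time_ctx = (np.mean([t[r, c] for t in history])   # historical context
                if history else table[r, c])
    return np.array([table[r, c], row_ctx, col_ctx, time_ctx])
```

A learned model would replace the means with attention over the row, the column, and the history, but the information flow is the same.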
Automatically constructed datasets for generating text from semi-structured data (tables), such as WikiBio, often contain reference texts that diverge from the information in the corresponding semi-structured data.
Generating texts from structured data (e.g., a table) is important for various natural language processing tasks such as question answering and dialog systems.