Table-to-Text Generation
38 papers with code • 8 benchmarks • 6 datasets
Table-to-Text Generation is the task of generating a natural-language description from a structured table.
Source: Key Fact as Pivot: A Two-Stage Model for Low Resource Table-to-Text Generation
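To make the task concrete, here is a minimal toy example of mapping an attribute-value table to a one-sentence description. This template-based sketch is purely illustrative (the field names are invented); the papers below all use learned neural models instead.

```python
# Illustrative only: a toy table-to-text mapping via a fixed template.
# Real systems listed on this page use neural encoder-decoder models.
def describe(table: dict) -> str:
    """Render a one-sentence description of an attribute-value table."""
    return (f"{table['name']} is a {table['occupation']} "
            f"born in {table['birthplace']}.")

row = {"name": "Ada Lovelace",
       "occupation": "mathematician",
       "birthplace": "London"}
print(describe(row))
# Ada Lovelace is a mathematician born in London.
```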
Most implemented papers
Key Fact as Pivot: A Two-Stage Model for Low Resource Table-to-Text Generation
We propose a novel model to separate the generation into two stages: key fact prediction and surface realization.
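The two-stage idea can be sketched as a pipeline: first select which table values are key facts, then verbalize them. In this hypothetical sketch, both stages are simple stand-ins (a fixed field filter and a template) for the paper's learned selector and generator.

```python
# Hypothetical sketch of a two-stage pipeline: (1) key fact
# prediction, (2) surface realization. Both stages are stand-ins
# for the learned components described in the paper.
def predict_key_facts(table: dict, keep: set) -> dict:
    # Stage 1: the paper learns which records to keep; here we
    # filter to a fixed set of fields for illustration.
    return {k: v for k, v in table.items() if k in keep}

def surface_realize(facts: dict) -> str:
    # Stage 2: verbalize the selected facts (template stand-in).
    return ", ".join(f"{k} {v}" for k, v in facts.items()) + "."

table = {"team": "Lakers", "points": "102", "city": "Los Angeles"}
facts = predict_key_facts(table, keep={"team", "points"})
print(surface_realize(facts))
# team Lakers, points 102.
```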
Table-to-Text Generation with Effective Hierarchical Encoder on Three Dimensions (Row, Column and Time)
To address the aforementioned problems, we not only model each table cell with respect to other records in the same row, but also enrich the table's representation by modeling each cell in the context of other cells in the same column and of historical (time-dimension) data.
Two Birds, One Stone: A Simple, Unified Model for Text Generation from Structured and Unstructured Data
We consider neural table-to-text generation and neural question generation (NQG) tasks for text generation from structured and unstructured data, respectively.
Variational Template Machine for Data-to-Text Generation
We propose the variational template machine (VTM), a novel method to generate text descriptions from data tables.
ToTTo: A Controlled Table-To-Text Generation Dataset
We present ToTTo, an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description.
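A controlled-generation input of this kind pairs a table with the coordinates of the highlighted cells the description must cover. The schematic below is illustrative; the field names are assumptions, not ToTTo's actual JSON schema.

```python
# Schematic of a controlled table-to-text input in the spirit of
# ToTTo: a table plus highlighted cell coordinates. Field names
# here are illustrative, not the dataset's actual schema.
example = {
    "table": [["Year", "Title", "Role"],
              ["1999", "The Matrix", "Neo"]],
    "highlighted_cells": [(1, 1), (1, 2)],  # (row, col) pairs
}

def highlighted_values(ex: dict) -> list:
    """Collect the cell values the one-sentence description must cover."""
    return [ex["table"][r][c] for r, c in ex["highlighted_cells"]]

print(highlighted_values(example))
# ['The Matrix', 'Neo']
```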
Stepwise Extractive Summarization and Planning with Structured Transformers
We propose encoder-centric stepwise models for extractive summarization using structured transformers -- HiBERT and Extended Transformers.
Enhancing Content Planning for Table-to-Text Generation with Data Understanding and Verification
Neural table-to-text models, which select and order salient data and verbalize it fluently via surface realization, have achieved promising progress.
TableGPT: Few-shot Table-to-Text Generation with Table Structure Reconstruction and Content Matching
Although neural table-to-text models have achieved remarkable progress with the help of large-scale datasets, they suffer from insufficient learning when training data is limited.
Controlling Hallucinations at Word Level in Data-to-Text Generation
Specifically, we propose a Multi-Branch Decoder which is able to leverage word-level labels to learn the relevant parts of each training instance.
Towards Faithfulness in Open Domain Table-to-text Generation from an Entity-centric View
In open-domain table-to-text generation, we notice that unfaithful generations usually contain hallucinated content that cannot be aligned to any input table record.