The dominant graph-to-sequence transduction models employ graph neural networks for graph representation learning, where the structural information is reflected by the receptive field of neurons.
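The idea that structural information enters through the receptive field of neurons can be seen in a minimal sketch (all names here are hypothetical, not from any of the listed papers): after k rounds of neighbor averaging, each node's value mixes information from exactly its k-hop neighborhood.

```python
# Minimal sketch of message passing on an adjacency-list graph.
# After k rounds, a node's feature depends on its k-hop neighborhood,
# i.e. the "receptive field" grows by one hop per round.

def propagate(features, adjacency, rounds):
    """features: {node: float}; adjacency: {node: [neighbors]}."""
    for _ in range(rounds):
        features = {
            node: (features[node] +
                   sum(features[nbr] for nbr in adjacency[node])) /
                  (1 + len(adjacency[node]))
            for node in features
        }
    return features

# Path graph a - b - c: after one round, node a sees nothing of c
# (c is two hops away); after two rounds, c's signal reaches a.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
feats = {"a": 0.0, "b": 0.0, "c": 1.0}
one = propagate(feats, adj, 1)
two = propagate(feats, adj, 2)
```

Real graph neural networks replace the mean with learned transformations, but the hop-per-layer growth of the receptive field is the same.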
Generating text from graph-based data, such as Abstract Meaning Representation (AMR), is a challenging task due to the inherent difficulty of properly encoding the structure of a graph with labeled edges.
We propose to split the generation process into a symbolic text-planning stage that is faithful to the input, followed by a neural generation stage that focuses only on realization.
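A toy illustration of this plan-then-realize split (the triples, function names, and template realizer below are illustrative stand-ins, not the paper's method): a symbolic planner fixes the order in which input facts will be expressed, and a separate realizer verbalizes that plan.

```python
# Hypothetical sketch of a two-stage pipeline: a symbolic planning
# stage that only orders the input triples (so it cannot hallucinate
# content), and a realization stage that verbalizes the plan. A
# template realizer stands in here for the neural generator.

def plan(triples):
    # Symbolic stage: decide the order of facts (trivial sort here).
    return sorted(triples, key=lambda t: t[1])

def realize(planned):
    # Realization stage: verbalize each planned fact in order.
    return " ".join(f"{s} {p.replace('_', ' ')} {o}." for s, p, o in planned)

triples = [("Alan_Turing", "field", "logic"),
           ("Alan_Turing", "born_in", "London")]
text = realize(plan(triples))
```

Because the planner can only reorder the given facts, faithfulness to the input is enforced before the (potentially lossy) generation stage runs.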
#2 best model for Data-to-Text Generation on WebNLG
AMR-to-text generation is a problem recently introduced to the NLP community, in which the goal is to generate sentences from Abstract Meaning Representation (AMR) graphs.
#2 best model for Graph-to-Sequence on LDC2015E86
Most previous work on neural text generation from graph-structured data relies on standard sequence-to-sequence methods.
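Applying a standard sequence-to-sequence model to graph input requires linearizing the graph into a token sequence first. A common scheme is a bracketed depth-first traversal, sketched below (the graph, identifiers, and function are illustrative assumptions; re-entrant nodes and cycles are not handled in this toy version).

```python
# Hypothetical sketch: depth-first linearization of a labeled graph
# into a bracketed token sequence, the usual preprocessing step for
# feeding graph-structured input to a standard seq2seq model.
# Assumes an acyclic, tree-shaped graph for simplicity.

def linearize(node, edges, labels):
    """edges: {node: [(edge_label, child)]}; labels: {node: concept}."""
    tokens = ["(", labels[node]]
    for edge_label, child in edges.get(node, []):
        tokens.append(":" + edge_label)
        tokens.extend(linearize(child, edges, labels))
    tokens.append(")")
    return tokens

# Tiny AMR-like graph for "the boy wants to go".
labels = {"w": "want-01", "b": "boy", "g": "go-01"}
edges = {"w": [("ARG0", "b"), ("ARG1", "g")]}
seq = " ".join(linearize("w", edges, labels))
```

The resulting flat sequence can be fed to any seq2seq encoder, at the cost of pushing structural relations into bracket tokens the model must learn to interpret.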
SOTA for Data-to-Text Generation on WebNLG
Many NLP applications can be framed as a graph-to-sequence learning problem.
Our method first generates the node and graph embeddings using an improved graph-based neural network with a novel aggregation strategy to incorporate edge direction information in the node embeddings.
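One simple way to make aggregation sensitive to edge direction (a toy stand-in, not the paper's actual strategy; all names and weights are hypothetical) is to pool incoming and outgoing neighbors separately and combine them with distinct weights.

```python
# Hypothetical sketch of direction-aware aggregation: incoming and
# outgoing neighbors are pooled separately and combined with distinct
# scalar weights, so the updated embedding reflects edge direction.

def aggregate(node, in_nbrs, out_nbrs, h, w_in=0.5, w_out=0.5):
    """h: {node: float} current embeddings; in/out neighbor id lists."""
    incoming = sum(h[n] for n in in_nbrs) / len(in_nbrs) if in_nbrs else 0.0
    outgoing = sum(h[n] for n in out_nbrs) / len(out_nbrs) if out_nbrs else 0.0
    return h[node] + w_in * incoming + w_out * outgoing

h = {"a": 1.0, "b": 2.0, "c": 3.0}
# Node b has an incoming edge from a and an outgoing edge to c.
val = aggregate("b", ["a"], ["c"], h, w_in=1.0, w_out=0.1)
```

With unequal weights (here 1.0 vs 0.1), reversing an edge changes the result, whereas a direction-blind mean would treat both orientations identically.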
SOTA for SQL-to-Text on WikiSQL
Sequence-to-sequence models have shown strong performance across a broad range of applications.