KG-to-Text Generation

17 papers with code • 11 benchmarks • 9 datasets

Knowledge-graph-to-text (KG-to-text) generation aims to generate high-quality text that is consistent with the input knowledge graph.

Description from: JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs
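
To make the task concrete, here is a minimal, made-up sketch (not drawn from JointGT or any benchmark below): the input is a set of (subject, relation, object) triples, and a common preprocessing step linearizes them into a token sequence for a sequence model. The triples, target sentence, and marker tokens are all illustrative.

```python
# Minimal sketch of the KG-to-text task: triples in, fluent text out.
triples = [
    ("Alan_Turing", "birthPlace", "London"),
    ("Alan_Turing", "field", "Computer_Science"),
]

def linearize(triples):
    """Flatten a small KG into a token sequence, a common seq2seq input format."""
    parts = []
    for subj, rel, obj in triples:
        parts.append(f"<S> {subj.replace('_', ' ')} <R> {rel} <O> {obj.replace('_', ' ')}")
    return " ".join(parts)

source = linearize(triples)
target = "Alan Turing, who worked in computer science, was born in London."
print(source)
# <S> Alan Turing <R> birthPlace <O> London <S> Alan Turing <R> field <O> Computer Science
```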

Most implemented papers

Text Generation from Knowledge Graphs with Graph Transformers

rikdz/GraphWriter NAACL 2019

Generating texts which express complex ideas spanning multiple sentences requires a structured representation of their content (document plan), but these representations are prohibitively expensive to manually produce.

Investigating Pretrained Language Models for Graph-to-Text Generation

UKPLab/plms-graph2text EMNLP (NLP4ConvAI) 2021

We show that the PLMs BART and T5 achieve new state-of-the-art results and that task-adaptive pretraining strategies improve their performance even further.
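
As a rough sketch of this PLM recipe (assuming the linearized input format from the sketch above; the exact prompts and fine-tuned checkpoints in UKPLab/plms-graph2text differ), a pretrained T5 can consume a flattened graph directly through the standard Transformers API:

```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# An off-the-shelf t5-small will not verbalize graphs well without
# fine-tuning on a KG-to-text corpus such as WebNLG; this only shows the API.
source = "translate Graph to Text: <S> Alan Turing <R> birthPlace <O> London"
inputs = tokenizer(source, return_tensors="pt")
output_ids = model.generate(**inputs, num_beams=4, max_length=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```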

Deep Graph Convolutional Encoders for Structured Data to Text Generation

diegma/graph-2-text WS 2018

Most previous work on neural text generation from graph-structured data relies on standard sequence-to-sequence methods.
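
For contrast with plain sequence-to-sequence encoding, here is a minimal NumPy sketch of one graph-convolutional layer over node features; the paper's actual encoder (with Levi-graph transformation and edge directionality) is more elaborate:

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolutional layer: each node averages its neighbours'
    features (plus its own, via self-loops) and applies a linear map + ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # degree normalization
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)

# Toy graph with 3 nodes (e.g., two entities and a relation node), 4-dim features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 4))
print(gcn_layer(H, A, W).shape)  # (3, 4)
```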

Handling Rare Items in Data-to-Text Generation

shimorina/webnlg-dataset WS 2018

Neural approaches to data-to-text generation generally handle rare input items using either delexicalisation or a copy mechanism.
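
A minimal sketch of the delexicalisation idea mentioned here (the entity names and slot tokens are illustrative, not the paper's exact scheme): rare entities are swapped for placeholders before generation and restored afterwards, so the model never has to produce rare strings itself.

```python
entities = {"ENT_1": "Aarhus Airport", "ENT_2": "Aktieselskab"}

def delexicalise(text, entities):
    """Replace rare entity strings with placeholder slots."""
    for slot, name in entities.items():
        text = text.replace(name, slot)
    return text

def relexicalise(text, entities):
    """Restore the original entity strings after generation."""
    for slot, name in entities.items():
        text = text.replace(slot, name)
    return text

target = "Aarhus Airport is operated by Aktieselskab."
delex = delexicalise(target, entities)    # the model trains/generates on this
print(delex)                              # ENT_1 is operated by ENT_2.
print(relexicalise(delex, entities))      # rare names restored
```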

Modeling Global and Local Node Contexts for Text Generation from Knowledge Graphs

UKPLab/kg2text 29 Jan 2020

Recent graph-to-text models generate text from graph-based data using either global or local aggregation to learn node representations.
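
As a hedged contrast between the two aggregation styles this snippet refers to (the paper itself combines both with considerably more machinery), local aggregation pools only over graph neighbours while global aggregation attends over all nodes:

```python
import numpy as np

def local_context(H, A):
    """Local aggregation: each node averages features of its graph
    neighbours (plus itself), so information flows one hop per layer."""
    A_hat = A + np.eye(A.shape[0])
    return (A_hat / A_hat.sum(axis=1, keepdims=True)) @ H

def global_context(H):
    """Global aggregation: each node attends over all nodes regardless of
    edges, as in full self-attention; long-range but structure-blind."""
    scores = H @ H.T / np.sqrt(H.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ H

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path graph
H = rng.normal(size=(3, 8))
print(local_context(H, A).shape, global_context(H).shape)  # (3, 8) (3, 8)
```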

Toward Subgraph-Guided Knowledge Graph Question Generation with Graph Neural Networks

hugochan/Graph2Seq-for-KGQG 13 Apr 2020

In this work, we focus on a more realistic setting where we aim to generate questions from a KG subgraph and target answers.
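
To illustrate this setting (the subgraph, answer, and question below are made up, not taken from the paper's datasets): the model receives a KG subgraph plus a target answer and must produce a question whose answer is that target.

```python
# Hypothetical input/output pair for KG question generation.
subgraph = [
    ("Barack_Obama", "spouse", "Michelle_Obama"),
    ("Michelle_Obama", "birthPlace", "Chicago"),
]
target_answer = "Chicago"
expected_question = "Where was Barack Obama's spouse born?"
```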

ENT-DESC: Entity Description Generation by Exploring Knowledge Graph

LiyingCheng95/EntityDescriptionGeneration EMNLP 2020

Previous work on knowledge-to-text generation takes as input a few RDF triples or key-value pairs conveying knowledge about some entities, and generates a natural language description.

KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation

wenhuchen/KGPT EMNLP 2020

We propose knowledge-grounded pre-training (KGPT), which consists of two parts: 1) a general knowledge-grounded generation model to generate knowledge-enriched text, and 2) a pre-training paradigm on a massive knowledge-grounded text corpus crawled from the web.

How to Train Your Agent to Read and Write

menggehe/DRAW 4 Jan 2021

Typically, this requires an agent to fully understand the knowledge from the given text materials and generate correct and fluent novel paragraphs, which is very challenging in practice.

Few-shot Knowledge Graph-to-Text Generation with Pretrained Language Models

RUCAIBox/Few-Shot-KG2Text Findings (ACL) 2021

This paper studies how to automatically generate a natural language text that describes the facts in a knowledge graph (KG).