KB-to-Language Generation

3 papers with code • 1 benchmark • 1 dataset

Given structured information from a knowledge base, generate a fluent natural-language description of that information.
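A common approach in the papers below is to linearize the KB input (e.g., subject-relation-object triples) into a token sequence that a pretrained seq2seq model such as BART or T5 can consume. A minimal sketch of such a linearizer, assuming the widely used `<H>`/`<R>`/`<T>` marker convention (the exact markers vary by implementation):

```python
def linearize_triples(triples):
    """Flatten KB triples into a single input string for a seq2seq PLM.

    Each (head, relation, tail) triple is wrapped with <H>/<R>/<T>
    markers, a common convention in graph-to-text pipelines; the
    markers here are illustrative, not tied to any specific repo.
    """
    return " ".join(f"<H> {h} <R> {r} <T> {t}" for h, r, t in triples)

# Example KB fragment (hypothetical data for illustration):
triples = [
    ("Alan Bean", "occupation", "astronaut"),
    ("Alan Bean", "birthPlace", "Wheeler, Texas"),
]
print(linearize_triples(triples))
# → <H> Alan Bean <R> occupation <T> astronaut <H> Alan Bean <R> birthPlace <T> Wheeler, Texas
```

The resulting string would then be fed to a fine-tuned PLM, which is trained to emit a sentence such as "Alan Bean is an astronaut who was born in Wheeler, Texas."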

Most implemented papers

Investigating Pretrained Language Models for Graph-to-Text Generation

UKPLab/plms-graph2text EMNLP (NLP4ConvAI) 2021

We show that the PLMs BART and T5 achieve new state-of-the-art results and that task-adaptive pretraining strategies further improve their performance.

Describing a Knowledge Base

EagleW/Describing_a_Knowledge_Base WS 2018

We aim to automatically generate natural language descriptions about an input structured knowledge base (KB).

Stage-wise Fine-tuning for Graph-to-Text Generation

EagleW/Stage-wise-Fine-tuning ACL 2021

Graph-to-text generation has benefited from pre-trained language models (PLMs), which achieve better performance than structured graph encoders.