KB-to-Language Generation
3 papers with code • 1 benchmark • 1 dataset
Given structured information from a knowledge base, generate a fluent natural-language description of that information.
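As a rough illustration of the input/output contract, the sketch below linearizes KB triples and decodes a description with an off-the-shelf seq2seq model. It is a minimal sketch only: the `t5-small` checkpoint, the `<H>/<R>/<T>` markers, and the example triples are assumptions, and a usable system would first be fine-tuned on KB-to-text data such as WebNLG.

```python
# Minimal sketch (not any specific paper's method): verbalize KB triples
# with a pretrained seq2seq model via HuggingFace transformers.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "t5-small"  # assumption: any seq2seq PLM checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Input facts as (head, relation, tail) triples.
triples = [
    ("Alan Turing", "field", "computer science"),
    ("Alan Turing", "birth place", "London"),
]

# One common scheme: linearize each triple with <H>/<R>/<T> markers.
source = " ".join(f"<H> {h} <R> {r} <T> {t}" for h, r, t in triples)

inputs = tokenizer(source, return_tensors="pt")
ids = model.generate(**inputs, num_beams=4, max_new_tokens=64)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```

Without fine-tuning the decoded text will be rough; the point is the shape of the pipeline: structured triples in, a fluent sentence out.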
Most implemented papers
Investigating Pretrained Language Models for Graph-to-Text Generation
We show that the PLMs BART and T5 achieve new state-of-the-art results and that task-adaptive pretraining strategies improve their performance even further.
Describing a Knowledge Base
We aim to automatically generate natural language descriptions of an input structured knowledge base (KB).
Stage-wise Fine-tuning for Graph-to-Text Generation
Graph-to-text generation has benefited from pre-trained language models (PLMs), which achieve better performance than structured graph encoders.
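The stage-wise recipe is easy to sketch with the HuggingFace Trainer API. In the hedged example below, everything concrete is an assumption: the `t5-small` checkpoint, the toy training pairs, and the per-stage learning rates merely stand in for a large, noisy first stage followed by the clean target dataset.

```python
# Hedged sketch of stage-wise fine-tuning: train on noisy, large-scale
# pairs first, then continue from those weights on the clean target set.
from datasets import Dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

model_name = "t5-small"  # assumption: any seq2seq PLM checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def encode(pairs):
    # Turn (linearized graph, reference text) pairs into model features.
    feats = tokenizer([s for s, _ in pairs], truncation=True, padding=True)
    feats["labels"] = tokenizer([t for _, t in pairs], truncation=True,
                                padding=True)["input_ids"]
    return Dataset.from_dict(feats)

# Toy stand-ins: stage 1 would really use noisy auto-aligned data at scale.
noisy = encode([("<H> London <R> country <T> UK", "London is in the UK.")])
clean = encode([("<H> Alan Turing <R> born in <T> London",
                 "Alan Turing was born in London.")])

def run_stage(dataset, output_dir, lr):
    # Each stage resumes from the weights the previous stage produced.
    args = Seq2SeqTrainingArguments(output_dir=output_dir, learning_rate=lr,
                                    num_train_epochs=1, report_to=[])
    Seq2SeqTrainer(model=model, args=args, train_dataset=dataset).train()

run_stage(noisy, "stage1", lr=3e-4)  # stage 1: noisy supervision
run_stage(clean, "stage2", lr=1e-4)  # stage 2: clean target data
```

The design point is simply that the second stage starts from weights already adapted to graph-shaped inputs, which tends to matter most when the clean dataset is small.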