KB-to-Language Generation

3 papers with code • 1 benchmark • 1 dataset

Given information from a knowledge base (e.g. a set of subject–relation–object triples), generate a natural-language description of that information.
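As an illustration, a common first step in this task is to linearize the KB input into a flat string that a sequence-to-sequence PLM (such as the BART and T5 models discussed below) can consume. The helper here is a minimal hypothetical sketch of that linearization; the `<H>`/`<R>`/`<T>` marker names follow a convention used in some graph-to-text work but are an illustrative assumption, not the exact format of any paper listed on this page.

```python
def linearize_triples(triples):
    """Flatten (head, relation, tail) KB triples into one input string.

    Hypothetical sketch: <H>/<R>/<T> markers delimit the head entity,
    relation, and tail entity so a seq2seq PLM can treat the graph as
    plain text. The marker names are an illustrative assumption.
    """
    parts = []
    for head, relation, tail in triples:
        parts.append(f"<H> {head} <R> {relation} <T> {tail}")
    return " ".join(parts)


# Example KB fragment
triples = [
    ("Alan Turing", "field", "computer science"),
    ("Alan Turing", "birth place", "London"),
]
source = linearize_triples(triples)
print(source)
# The resulting string can then be tokenized and passed to a
# pre-trained seq2seq model's generate() method to produce the
# natural-language description.
```

A model fine-tuned on such linearized inputs learns to map the marked-up triples to fluent text, e.g. "Alan Turing was a computer scientist born in London."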

Stage-wise Fine-tuning for Graph-to-Text Generation

EagleW/Stage-wise-Fine-tuning ACL 2021

Graph-to-text generation has benefited from pre-trained language models (PLMs) in achieving better performance than structured graph encoders.

17 May 2021

Investigating Pretrained Language Models for Graph-to-Text Generation

bjascob/amrlib EMNLP (NLP4ConvAI) 2021

We show that the PLMs BART and T5 achieve new state-of-the-art results and that task-adaptive pretraining strategies improve their performance even further.

16 Jul 2020

Describing a Knowledge Base

EagleW/Describing_a_Knowledge_Base WS 2018

We aim to automatically generate natural language descriptions about an input structured knowledge base (KB).

06 Sep 2018