3 papers with code • 1 benchmark • 1 dataset
Generating natural language text from a structured conceptual representation, such as an ontology.
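As a minimal illustration of the task's input/output interface (not tied to any specific paper listed below), the sketch assumes a generic seq2seq model from the Hugging Face transformers library that has been fine-tuned for concept-to-text generation; the checkpoint name is a hypothetical placeholder, and the linearized-concept input format is one common convention, not the only one.

```python
# Minimal sketch of concept-to-text generation with a seq2seq model.
# The checkpoint below is a hypothetical placeholder, not a specific
# published model; any concept-to-text fine-tuned seq2seq model would fit.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "your-org/seq2seq-concept-to-text"  # hypothetical checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Input: a flat set of concepts (e.g., drawn from an ontology),
# linearized into a single string as many concept-to-text models expect.
concepts = ["dog", "frisbee", "catch", "park"]
inputs = tokenizer(" ".join(concepts), return_tensors="pt")

# Output: a fluent sentence intended to cover the input concepts.
output_ids = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```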
This paper introduces a neural model for concept-to-text generation that scales to large, rich domains.
We motivate and propose a suite of simple but effective improvements for concept-to-text generation called SAPPHIRE: Set Augmentation and Post-hoc PHrase Infilling and REcombination.
We investigate the use of multimodal information contained in images as an effective method for enhancing the commonsense of Transformer models for text generation.