4 papers with code • 1 benchmark • 1 dataset
Generating natural language text from a conceptualized representation, such as an ontology.
This paper introduces a neural model for concept-to-text generation that scales to large, rich domains.
We motivate and propose a suite of simple but effective improvements for concept-to-text generation called SAPPHIRE: Set Augmentation and Post-hoc PHrase Infilling and REcombination.
We investigate the use of multimodal information contained in images as an effective method for enhancing the commonsense reasoning of Transformer models for text generation.
Recent advances in text-to-image synthesis make it possible to visualize machine imaginations for a given context.