Text generation is the task of producing text that is intended to be indistinguishable from human-written text.
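As a minimal illustration of the task itself (not of any particular paper's method), a bigram Markov chain is about the simplest possible text generator: it samples each word conditioned on the previous one, using transition counts from a toy corpus. All names and the corpus below are illustrative.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count word-to-next-word transitions in a toy corpus."""
    words = text.split()
    table = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def generate(table, start, length=8, seed=0):
    """Sample a word sequence by following random bigram transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        candidates = table.get(out[-1])
        if not candidates:  # no known continuation; stop early
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Modern neural generators replace the count table with a learned conditional distribution over the whole preceding context, but the sampling loop has the same shape.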
We present a recurrent neural network based system for automatic quality estimation of natural language generation (NLG) outputs, which jointly learns to assign numerical ratings to individual outputs and to provide pairwise rankings of two different outputs.
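A joint objective of this kind (numerical rating plus pairwise ranking) can be sketched generically: one term regresses each output's predicted rating toward the human rating, and a hinge term pushes the model to score the better output of a pair higher. This is a hedged sketch of such a multi-task loss, not the paper's exact formulation; the function names, margin, and weighting are illustrative.

```python
def rating_loss(predicted, target):
    """Squared error between a predicted and a reference rating."""
    return (predicted - target) ** 2

def pairwise_ranking_loss(score_better, score_worse, margin=1.0):
    """Hinge loss: penalize pairs where the better output is not
    scored at least `margin` higher than the worse one."""
    return max(0.0, margin - (score_better - score_worse))

def joint_loss(pred_a, pred_b, rating_a, rating_b, alpha=0.5):
    """Combine rating regression and pairwise ranking for one pair.
    `alpha` is an illustrative weight between the two terms."""
    reg = rating_loss(pred_a, rating_a) + rating_loss(pred_b, rating_b)
    better, worse = (pred_a, pred_b) if rating_a >= rating_b else (pred_b, pred_a)
    rank = pairwise_ranking_loss(better, worse)
    return alpha * reg + (1 - alpha) * rank
```

In the actual system, `pred_a` and `pred_b` would be scores produced by a recurrent network over the two NLG outputs, and the combined loss would be minimized by gradient descent.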
In this paper, we present HuggingFace's Transformers library, a library for state-of-the-art NLP, making these developments available to the community by gathering state-of-the-art general-purpose pretrained models under a unified API together with an ecosystem of libraries, examples, tutorials and scripts targeting many downstream NLP tasks.
In this paper, we propose a fully-attentive captioning algorithm which achieves state-of-the-art performance on language generation while restricting its computational demands.
To fight fake news, many fact-checking systems, comprising both human-based fact-checking sites (e.g., snopes.com and politifact.com) and automatic detection systems, have been developed in recent years.
News articles such as sports game reports are often thought to closely follow the underlying game statistics, but in practice they contain a notable amount of background knowledge, interpretation, insight into the game, and quotes that are not present in the official statistics.
The model is unified in that (1) it can be fine-tuned for either vision-language generation (e.g., image captioning) or understanding (e.g., visual question answering) tasks, and (2) it uses a shared multi-layer transformer network for both encoding and decoding, which differs from many existing methods where the encoder and decoder are implemented using separate models.
After the pre-training procedure, we use monolingual data to fine-tune the pre-trained model on downstream NLG tasks.
In this paper, we explore a new approach for automated chess commentary generation, which aims to generate chess commentary texts in different categories (e.g., description, comparison, planning, etc.).
Generative models for text have substantially contributed to tasks like machine translation and language modeling, using maximum likelihood estimation (MLE).
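MLE training for language modeling maximizes the probability the model assigns to observed text, which is equivalent to minimizing the per-token negative log-likelihood. A minimal sketch with a hand-specified unigram distribution (the probabilities are illustrative, not learned):

```python
import math

def negative_log_likelihood(tokens, probs):
    """Average per-token negative log-likelihood of a sequence
    under a given distribution. Minimizing this quantity over a
    training corpus is the MLE objective."""
    total = 0.0
    for tok in tokens:
        total += -math.log(probs[tok])
    return total / len(tokens)

probs = {"the": 0.5, "cat": 0.25, "sat": 0.25}  # illustrative unigram model
print(negative_log_likelihood(["the", "cat", "sat"], probs))
```

In a neural language model the per-token probabilities come from a softmax conditioned on the preceding context, but the training loss has exactly this form.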