Text generation is the task of generating text that is indistinguishable from human-written text.
Additionally, these models are typically trained via maximum likelihood and teacher forcing.
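As a minimal illustration of this training setup, the PyTorch sketch below feeds the ground-truth prefix as input at every step (teacher forcing) and maximizes the likelihood of the next token with a cross-entropy loss. The tiny model, vocabulary size, and random batch are placeholder assumptions, not taken from any particular paper.

```python
# Minimal sketch of maximum-likelihood training with teacher forcing
# (illustrative only; model, vocab size, and data are placeholder assumptions).
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 1000, 64, 128

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        out, _ = self.lstm(self.embed(tokens))
        return self.proj(out)

model = TinyLM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

batch = torch.randint(0, vocab_size, (8, 20))   # fake token ids
inputs, targets = batch[:, :-1], batch[:, 1:]   # teacher forcing: shift by one

logits = model(inputs)                          # ground-truth history as input
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
optimizer.step()
```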
In this paper, we present HuggingFace's Transformers library, a library for state-of-the-art NLP. It makes these developments available to the community by gathering state-of-the-art, general-purpose pretrained models under a unified API, together with an ecosystem of libraries, examples, tutorials, and scripts targeting many downstream NLP tasks.
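A hedged usage sketch of that unified API through the library's `pipeline` entry point; the model name `gpt2` and the generation arguments are assumptions that depend on the installed version of the library.

```python
# Hedged usage sketch of the Transformers unified API (pip install transformers);
# exact model names and generation defaults depend on the library version.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
print(generator("The future of NLP is", max_new_tokens=20, num_return_sequences=1))
```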
Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.
This paper shows how Long Short-Term Memory (LSTM) recurrent neural networks can be used to generate complex sequences with long-range structure, simply by predicting one data point at a time.
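To make the one-step-at-a-time idea concrete, here is a sketch of autoregressive sampling from an LSTM: each sampled token is fed back as the next input while the hidden state is carried forward. The untrained weights, start token, and sequence length are illustrative assumptions.

```python
# Sketch of one-step-at-a-time autoregressive sampling from an LSTM
# (untrained placeholder weights; start token and length are assumptions).
import torch
import torch.nn as nn

vocab_size = 1000
embed = nn.Embedding(vocab_size, 64)
lstm = nn.LSTM(64, 128, batch_first=True)
proj = nn.Linear(128, vocab_size)

token = torch.tensor([[0]])        # assumed start-of-sequence id
state, generated = None, []
for _ in range(20):
    out, state = lstm(embed(token), state)       # carry hidden state forward
    probs = torch.softmax(proj(out[:, -1]), dim=-1)
    token = torch.multinomial(probs, 1)          # sample the next data point
    generated.append(token.item())
print(generated)
```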
We introduce a method for training GANs with discrete data that uses the estimated difference measure from the discriminator to compute importance weights for generated samples, thus providing a policy gradient for training the generator.
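A conceptual sketch of this policy-gradient idea follows: because a sampled discrete token is not differentiable, the discriminator's score on the sample is treated as a weight on the generator's log-probability, giving a REINFORCE-style surrogate loss. All networks, the stand-in discriminator score, and the hyperparameters here are illustrative assumptions, not the paper's exact method.

```python
# Conceptual sketch: the discriminator's score on a sampled sequence is turned
# into an importance weight/reward for the generator's log-prob.
# All networks and numbers are illustrative assumptions.
import torch
import torch.nn as nn

vocab_size, hidden = 100, 64

generator = nn.Sequential(nn.Linear(hidden, vocab_size))   # toy policy head
state = torch.randn(1, hidden)

logits = generator(state)
dist = torch.distributions.Categorical(logits=logits)
sample = dist.sample()                    # discrete token: not differentiable

# Stand-in for the discriminator's estimated difference measure on the sample.
with torch.no_grad():
    reward = torch.sigmoid(torch.randn(1))   # pretend D(sample) score

# REINFORCE-style surrogate loss: reward-weighted negative log-likelihood.
loss = -(reward * dist.log_prob(sample)).mean()
loss.backward()
```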
fairseq is an open-source sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks.
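A hedged example of loading a pretrained fairseq translation model through `torch.hub`, following the pattern shown in the fairseq README; the exact model identifier and the tokenizer/BPE arguments vary by release, so treat them as assumptions to check against the docs.

```python
# Hedged sketch of using a pretrained fairseq model via torch.hub; the model id
# and tokenizer/bpe arguments follow the fairseq README pattern and may differ
# across fairseq releases.
import torch

en2de = torch.hub.load('pytorch/fairseq',
                       'transformer.wmt19.en-de.single_model',
                       tokenizer='moses', bpe='fastbpe')
print(en2de.translate('Machine learning is great!', beam=5))
```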
We observe that our method consistently outperforms beam search (BS) and previously proposed techniques for diverse decoding from neural sequence models.
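For illustration, diverse (group) beam search is also exposed through the Transformers `generate()` API; the sketch below assumes a recent library version, and the choice of `t5-small` plus the specific penalty value are example assumptions rather than the paper's setup.

```python
# Hedged sketch of diverse (group) beam search via the Transformers generate()
# API; parameter names assume a recent library version, model choice is arbitrary.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tok("translate English to German: The weather is nice.",
             return_tensors="pt")
outputs = model.generate(**inputs, num_beams=6, num_beam_groups=3,
                         diversity_penalty=0.5, num_return_sequences=3)
for o in outputs:
    print(tok.decode(o, skip_special_tokens=True))
```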
In this work, we introduce a model and beam-search training scheme, based on the work of Daumé III and Marcu (2005), that extends seq2seq to learn global sequence scores.
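To make the setting concrete, the sketch below implements plain beam search, the decoding procedure that such training schemes run in their inner loop (it does not implement the training objective itself); `score_fn`, the toy vocabulary, and the hyperparameters are placeholder assumptions.

```python
# Minimal sketch of the beam search procedure that beam-search training schemes
# run in the inner loop; score_fn stands in for the model's per-step scores.
import math

def beam_search(score_fn, start, beam_size=3, max_len=10, eos=1):
    """score_fn(prefix) -> list of (token, log_prob) continuations."""
    beams = [([start], 0.0)]
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            if prefix[-1] == eos:                 # finished hypotheses persist
                candidates.append((prefix, score))
                continue
            for token, logp in score_fn(prefix):
                candidates.append((prefix + [token], score + logp))
        # keep the beam_size highest-scoring (partial) sequences
        beams = sorted(candidates, key=lambda x: x[1], reverse=True)[:beam_size]
    return beams

# Toy score function: uniform distribution over a 5-token vocabulary.
uniform = lambda prefix: [(t, math.log(0.2)) for t in range(5)]
print(beam_search(uniform, start=0))
```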