Language modeling is the task of predicting the next word or character in a document.
(Image credit: Exploring the Limits of Language Modeling)
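As a concrete illustration of next-word prediction, here is a minimal sketch using a pretrained causal language model. It assumes the Hugging Face transformers library and the public "gpt2" checkpoint, neither of which is specified above.

```python
# Minimal sketch of next-token prediction with a pretrained causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "Language modeling is the task of predicting the next"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, seq_len, vocab_size)

# Distribution over the vocabulary for the token that follows the prompt.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)
for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode(int(token_id))), float(score))
```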
We present three question-answering models trained on the SQuAD 2.0 dataset -- BiDAF, DocumentQA, and ALBERT Retro-Reader -- demonstrating the improvement of language models over the past three years.
We demonstrate the adversary's high attack success rate while maintaining functionality for regular users, with triggers that remain inconspicuous to human administrators.
To this end, we propose KeBioLM, a biomedical pretrained language model that explicitly leverages knowledge from the UMLS knowledge base.
We demonstrate empirically that a large English language model coupled with modern machine translation outperforms native language models in most Scandinavian languages.
We propose to add independent pseudo quantization noise to model parameters during training to approximate the effect of a quantization operator.
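The sketch below shows one way such additive noise could be implemented during training; it is a hedged PyTorch sketch, with `pseudo_quant_noise` and `bits` as illustrative names rather than the paper's actual API.

```python
# Sketch: approximate a uniform quantizer with additive pseudo quantization noise.
import torch

def pseudo_quant_noise(weight: torch.Tensor, bits: int = 8) -> torch.Tensor:
    """Add uniform noise matching the step size of a `bits`-bit uniform quantizer."""
    # Step size of a uniform quantizer over the weight's dynamic range.
    delta = (weight.max() - weight.min()) / (2 ** bits - 1)
    noise = (torch.rand_like(weight) - 0.5) * delta  # U(-delta/2, +delta/2)
    # Detach so gradients do not flow through the noise's dependence on delta.
    return weight + noise.detach()

# During training, use the noisy parameters in the forward pass:
w = torch.randn(256, 256, requires_grad=True)
noisy_w = pseudo_quant_noise(w, bits=4)
loss = (noisy_w ** 2).mean()   # stand-in for the real training loss
loss.backward()                # gradients reach `w` because the noise is additive
```

Because the noise is differentiable pass-through rather than a hard rounding step, the model can be trained with ordinary backpropagation while still experiencing quantization-like perturbations.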
Ranked #13 on Language Modelling on WikiText-103
With advances in neural language models, the focus of linguistic steganography has shifted from edit-based approaches to generation-based ones.
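As a toy illustration of the generation-based idea (not any specific paper's method), the sketch below hides one secret bit per generation step by choosing between a language model's top-2 next tokens; it assumes the Hugging Face transformers library and the "gpt2" checkpoint. Real systems typically use arithmetic coding over the full distribution.

```python
# Toy sketch of generation-based linguistic steganography:
# each secret bit selects one of the model's two most likely next tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

secret_bits = [1, 0, 1, 1, 0]
ids = tokenizer("The weather today is", return_tensors="pt").input_ids

for bit in secret_bits:
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    top2 = torch.topk(logits, k=2).indices   # two most likely continuations
    chosen = top2[bit].view(1, 1)            # the bit picks which one to emit
    ids = torch.cat([ids, chosen], dim=1)

print(tokenizer.decode(ids[0]))  # stegotext; a receiver with the same model recovers the bits
```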
The overwhelming amount of biomedical scientific texts calls for the development of effective language models able to tackle a wide range of biomedical natural language processing (NLP) tasks.
Ranked #1 on Named Entity Recognition on BC5CDR (using extra training data)
To be specific, we propose a new paradigm of text-guided image generation and manipulation that builds on the strengths of a pretrained GAN model.
Ranked #1 on Text-to-Image Generation on Multi-Modal-CelebA-HQ