
Language Modelling

360 papers with code · Natural Language Processing

Language modeling is the task of predicting the next word or character in a document.
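As a minimal illustration of this task, the sketch below trains a toy bigram character model on a tiny corpus and predicts the most likely next character. The corpus and function names are illustrative inventions, not drawn from any of the papers listed here.

```python
# Toy language model: a bigram character model that predicts the next
# character from co-occurrence counts over a small training string.
from collections import Counter, defaultdict

def train_bigram(text):
    """Count character bigrams and normalise into next-character distributions."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    return {
        prev: {c: n / sum(ctr.values()) for c, n in ctr.items()}
        for prev, ctr in counts.items()
    }

def predict_next(model, prev_char):
    """Return the most probable next character after prev_char, or None."""
    dist = model.get(prev_char)
    return max(dist, key=dist.get) if dist else None

model = train_bigram("the theory of the thing")
print(predict_next(model, "t"))  # 'h' — the only character that follows 't' here
```

Real language models replace these counts with neural networks (RNNs, Transformers) and operate over word or subword tokens, but the prediction objective is the same.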

* indicates models using dynamic evaluation, whereby, at test time, the model adapts to tokens it has already seen in order to improve its predictions on subsequent tokens (Mikolov et al., 2010; Krause et al., 2017).
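The idea behind dynamic evaluation can be sketched with a count-based toy model: after each test-time prediction, the observed bigram is folded back into the model, so predictions later in the test sequence benefit from the earlier test context. This stands in for the gradient-based test-time updates used by the neural models on the leaderboards; all names below are illustrative.

```python
# Hedged sketch of dynamic evaluation with a count-based bigram model:
# the model starts empty, but adapts to each bigram it observes at test
# time, so it quickly learns the repeating pattern in the test string.
from collections import Counter, defaultdict

def evaluate_dynamic(counts, test_text):
    """Count correct next-character predictions while adapting to seen bigrams."""
    correct = 0
    for prev, nxt in zip(test_text, test_text[1:]):
        dist = counts.get(prev)
        if dist and max(dist, key=dist.get) == nxt:
            correct += 1
        counts.setdefault(prev, Counter())[nxt] += 1  # test-time adaptation step
    return correct

counts = defaultdict(Counter)
hits = evaluate_dynamic(counts, "ababababab")
print(hits)  # 7 of 9 bigrams predicted correctly once the pattern is absorbed
```

A purely static model with no training data would score zero here; the test-time updates are what recover the repetitive structure.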

State-of-the-art leaderboards

Latest papers without code

VL-BERT: Pre-training of Generic Visual-Linguistic Representations

22 Aug 2019

We introduce a new pre-trainable generic representation for visual-linguistic tasks, called Visual-Linguistic BERT (VL-BERT for short).

LANGUAGE MODELLING QUESTION ANSWERING VISUAL COMMONSENSE REASONING VISUAL QUESTION ANSWERING

Sequential Latent Spaces for Modeling the Intention During Diverse Image Captioning

22 Aug 2019

We encourage this temporal latent space to capture the 'intention' about how to complete the sentence by mimicking a representation which summarizes the future.

IMAGE CAPTIONING LANGUAGE MODELLING

Latent Relation Language Models

21 Aug 2019

In this paper, we propose Latent Relation Language Models (LRLMs), a class of language models that parameterizes the joint distribution over the words in a document and the entities that occur therein via knowledge graph relations.

LANGUAGE MODELLING

Restricted Recurrent Neural Networks

21 Aug 2019

Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), have become standard building blocks for learning from sequential data in many research areas, including natural language processing and speech analysis.

LANGUAGE MODELLING

WikiCREM: A Large Unsupervised Corpus for Coreference Resolution

21 Aug 2019

We use a language-model-based approach for pronoun resolution in combination with our WikiCREM dataset.

COREFERENCE RESOLUTION LANGUAGE MODELLING

Encoder-Agnostic Adaptation for Conditional Language Generation

19 Aug 2019

Large pretrained language models have changed the way researchers approach discriminative natural language understanding tasks, leading to the dominance of approaches that adapt a pretrained model for arbitrary downstream tasks.

LANGUAGE MODELLING TEXT GENERATION

Question Answering based Clinical Text Structuring Using Pre-trained Language Model

19 Aug 2019

Clinical text structuring is a critical and fundamental task for clinical research.

LANGUAGE MODELLING QUESTION ANSWERING

Music Transcription Based on Bayesian Piece-Specific Score Models Capturing Repetitions

18 Aug 2019

Most work on models for music transcription has focused on describing local sequential dependence of notes in musical scores and failed to capture their global repetitive structure, which can be a useful guide for transcribing music.

LANGUAGE MODELLING