
Language Modelling

238 papers with code · Natural Language Processing

Language modeling is the task of predicting the next word or character in a document.
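As a minimal illustration of next-word prediction, a count-based bigram model picks the word that most often followed the current one in training data. This is a hypothetical toy sketch, not any of the models listed below:

```python
from collections import Counter, defaultdict

def train_bigram_lm(tokens):
    """Count bigram frequencies: counts[w1][w2] = # of times w2 follows w1."""
    counts = defaultdict(Counter)
    for w1, w2 in zip(tokens, tokens[1:]):
        counts[w1][w2] += 1
    return counts

def predict_next(counts, word):
    """Return the most likely next word after `word`, or None if unseen."""
    following = counts.get(word)
    if not following:
        return None
    return following.most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran".split()
lm = train_bigram_lm(corpus)
predict_next(lm, "the")  # → "cat" ("cat" follows "the" twice, "mat" once)
```

Neural language models replace these raw counts with a learned distribution over the vocabulary, conditioned on the full preceding context rather than a single word.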

* indicates models using dynamic evaluation, where, at test time, a model may adapt to tokens it has already seen in order to improve performance on subsequent tokens (Mikolov et al., 2010; Krause et al., 2017).
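The idea behind dynamic evaluation can be sketched with a toy count-based model that interpolates its static distribution with a cache of tokens observed so far in the test stream. This is a hypothetical illustration of the adaptation principle only, not the gradient-based RNN methods of Mikolov et al. or Krause et al.:

```python
from collections import Counter

class DynamicUnigramLM:
    """Toy dynamic evaluation: a static unigram model whose probabilities
    are interpolated with a cache of tokens seen at test time.
    (Hypothetical sketch, not the actual RNN-based methods cited above.)"""

    def __init__(self, train_tokens, cache_weight=0.3):
        self.static = Counter(train_tokens)
        self.static_total = sum(self.static.values())
        self.cache = Counter()
        self.cache_total = 0
        self.lam = cache_weight

    def prob(self, token):
        p_static = self.static[token] / self.static_total
        p_cache = self.cache[token] / self.cache_total if self.cache_total else 0.0
        return (1 - self.lam) * p_static + self.lam * p_cache

    def observe(self, token):
        # Adapt to the test stream: later predictions favour tokens seen so far.
        self.cache[token] += 1
        self.cache_total += 1

lm = DynamicUnigramLM("a b c d".split())
before = lm.prob("d")
lm.observe("d")          # test-time adaptation on a seen token
after = lm.prob("d")     # probability of "d" rises after it is observed
```

Gradient-based dynamic evaluation applies the same principle to neural models, continuing parameter updates on the test sequence as it is consumed.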

State-of-the-art leaderboards

Latest papers without code

Unsupervised Speech Recognition via Segmental Empirical Output Distribution Matching

ICLR 2019 · Chih-Kuan Yeh et al.

Experimental results on TIMIT dataset demonstrate the success of this fully unsupervised phoneme recognition system, which achieves a phone error rate (PER) of 41.6%.

LANGUAGE MODELLING · SPEECH RECOGNITION

01 May 2019

Learning Recurrent Binary/Ternary Weights

ICLR 2019 · Arash Ardakani et al.

On the software side, we evaluate the performance (in terms of accuracy) of our method using long short-term memories (LSTMs) and gated recurrent units (GRUs) on various sequential tasks, including sequence classification and language modeling.

LANGUAGE MODELLING

01 May 2019

Variational Smoothing in Recurrent Neural Network Language Models

ICLR 2019 · Lingpeng Kong et al.

We present a new theoretical perspective of data noising in recurrent neural network language models (Xie et al., 2017).

LANGUAGE MODELLING

01 May 2019

Pay Less Attention with Lightweight and Dynamic Convolutions

ICLR 2019 · Felix Wu et al.

We predict separate convolution kernels based solely on the current time-step in order to determine the importance of context elements.

ABSTRACTIVE TEXT SUMMARIZATION · LANGUAGE MODELLING · MACHINE TRANSLATION

01 May 2019

Machine Reading Comprehension for Answer Re-Ranking in Customer Support Chatbots

12 Feb 2019 · Momchil Hardalov et al.

Recent advances in deep neural networks, language modeling and language generation have introduced new ideas to the field of conversational agents.

LANGUAGE MODELLING · MACHINE READING COMPREHENSION · TEXT GENERATION


BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model

11 Feb 2019 · Alex Wang et al.

We show that BERT (Devlin et al., 2018) is a Markov random field language model.

LANGUAGE MODELLING


Compression of Recurrent Neural Networks for Efficient Language Modeling

6 Feb 2019 · Artem M. Grachev et al.

We propose a general pipeline for applying the most suitable methods to compress recurrent neural networks for language modeling.

LANGUAGE MODELLING


Model Unit Exploration for Sequence-to-Sequence Speech Recognition

5 Feb 2019 · Kazuki Irie et al.

We also conduct a detailed analysis of the various models, and investigate their complementarity: we find that we can improve WERs by up to 9% relative by rescoring N-best lists generated from the word-piece model with either the phoneme or the grapheme model.

LANGUAGE MODELLING · SEQUENCE-TO-SEQUENCE · SPEECH RECOGNITION


The Referential Reader: A Recurrent Entity Network for Anaphora Resolution

5 Feb 2019 · Fei Liu et al.

We present a new architecture for storing and accessing entity mentions during online text processing.

LANGUAGE MODELLING


Review Conversational Reading Comprehension

3 Feb 2019 · Hu Xu et al.

Seeking information about products and services is an important activity of online consumers before making a purchase decision.

LANGUAGE MODELLING · MACHINE READING COMPREHENSION
