Language modeling is the task of predicting the next word or character in a document.
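As a minimal illustration of the task, the sketch below estimates the most likely next word from bigram counts over a tiny made-up corpus (the corpus, function name, and example words are all hypothetical, not from any of the papers listed here):

```python
from collections import Counter, defaultdict

# Toy next-word prediction: estimate P(next word | previous word)
# from bigram counts in a tiny example corpus.
corpus = "the cat sat on the mat and the cat slept".split()

bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word after `word`, or None if unseen."""
    counts = bigram_counts.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

Real systems replace the bigram table with a neural network (recurrent, convolutional, or feed-forward) that scores the next token given the full preceding context.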
In this work we explore recent advances in recurrent neural networks for large-scale language modeling, a task central to language understanding.
#8 best model for Language Modelling on One Billion Word
We propose a new benchmark corpus to be used for measuring progress in statistical language modeling.
#14 best model for Language Modelling on One Billion Word
Feed-forward and convolutional architectures have recently been shown to achieve superior results on some sequence modeling tasks such as machine translation, with the added advantage that they concurrently process all inputs in the sequence, leading to easy parallelization and faster training times.
We propose to improve the representation in sequence models by augmenting current approaches with an autoencoder that is forced to compress the sequence through an intermediate discrete latent space.
In this paper we describe an extension of the Kaldi software toolkit to support neural-based language modeling, intended for use in automatic speech recognition (ASR) and related tasks.
This paper describes a new baseline system for automatic speech recognition (ASR) in the CHiME-4 challenge to promote the development of noisy ASR in speech processing communities by providing 1) state-of-the-art system with a simplified single system comparable to the complicated top systems in the challenge, 2) publicly available and reproducible recipe through the main repository in the Kaldi speech recognition toolkit.
Models trained with LFMMI provide a relative word error rate reduction of ~11.5% over those trained with the cross-entropy objective function, and ~8% over those trained with cross-entropy and sMBR objective functions.
SOTA for Speech Recognition on WSJ eval92
We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).
#2 best model for Coreference Resolution on CoNLL 2012