Language Modelling

486 papers with code · Natural Language Processing

Language modeling is the task of predicting the next word or character in a document.
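The task can be made concrete with a minimal sketch: a count-based bigram model (a hypothetical toy, not any of the models listed below) that predicts the next word as the one that most often followed the current word in its training text.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; real language models train on far larger text.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# counts[w1][w2] = number of times w2 followed w1 in the corpus.
counts = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1

def predict_next(word):
    """Predict the most frequent continuation of `word`."""
    following = counts[word]
    return following.most_common(1)[0][0] if following else None

print(predict_next("sat"))  # "on": "sat" is always followed by "on" here
```

Neural language models replace the count table with a learned distribution over the vocabulary, but the prediction task is the same.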

* indicates models using dynamic evaluation, where the model may adapt at test time to tokens it has already seen in order to improve predictions on subsequent tokens (Mikolov et al., 2010; Krause et al., 2017).
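Dynamic evaluation can be sketched with the same kind of toy count model (a hypothetical stand-in for the neural models these leaderboards compare): an adapted copy keeps updating on each test token after scoring it, so patterns that repeat in the test stream become cheaper to predict than they are for a frozen model.

```python
import copy
import math
from collections import Counter, defaultdict

VOCAB = 10  # assumed vocabulary size for add-one smoothing

class BigramLM:
    """Toy count-based bigram model (a stand-in for an RNN or Transformer)."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def update(self, w1, w2):
        self.counts[w1][w2] += 1

    def log_prob(self, w1, w2):
        total = sum(self.counts[w1].values())
        return math.log((self.counts[w1][w2] + 1) / (total + VOCAB))

model = BigramLM()
train = "a b a b a c".split()
for w1, w2 in zip(train, train[1:]):
    model.update(w1, w2)

# The test stream repeats a pattern ("a c") that was rare in training.
test = "a c a c a c".split()
adapted = copy.deepcopy(model)

static_lp = dynamic_lp = 0.0
for w1, w2 in zip(test, test[1:]):
    static_lp += model.log_prob(w1, w2)      # frozen model
    dynamic_lp += adapted.log_prob(w1, w2)   # score first...
    adapted.update(w1, w2)                   # ...then adapt to the seen token

print(dynamic_lp > static_lp)  # True: adaptation raises test likelihood
```

For neural models, the `update` step is a gradient step on the just-scored tokens rather than a count increment, but the score-then-adapt loop is the same.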

(Image credit: Exploring the Limits of Language Modeling)

Latest papers with code

Learning Cross-modal Context Graph for Visual Grounding

AAAI 2020 · youngfly11/LCMCG-PyTorch

To address their limitations, this paper proposes a language-guided graph representation that captures the global context of grounding entities and their relations, and develops a cross-modal graph matching strategy for the multiple-phrase visual grounding task.

GRAPH MATCHING LANGUAGE MODELLING NATURAL LANGUAGE VISUAL GROUNDING PHRASE GROUNDING

13 Feb 2020

A Probabilistic Formulation of Unsupervised Text Style Transfer

10 Feb 2020 · cindyxinyiwang/deep-latent-sequence-model

Across all style transfer tasks, our approach yields substantial gains over state-of-the-art non-generative baselines, including the state-of-the-art unsupervised machine translation techniques that our approach generalizes.

LANGUAGE MODELLING STYLE TRANSFER TEXT STYLE TRANSFER UNSUPERVISED MACHINE TRANSLATION

Learning@home: Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts

10 Feb 2020 · mryab/learning-at-home

Many recent breakthroughs in deep learning were achieved by training increasingly larger models on massive datasets.

LANGUAGE MODELLING

Snippext: Semi-supervised Opinion Mining with Augmented Data

7 Feb 2020 · rit-git/Snippext_public

A novelty of Snippext is its two-pronged approach to achieving state-of-the-art (SOTA) performance with little labeled training data: (1) data augmentation to automatically generate more labeled training data from existing examples, and (2) a semi-supervised learning technique to leverage the massive amount of unlabeled data in addition to the limited labeled data.

DATA AUGMENTATION LANGUAGE MODELLING OPINION MINING

Parsing as Pretraining

5 Feb 2020 · aghie/parsing-as-pretraining

We first cast constituent and dependency parsing as sequence tagging.

DEPENDENCY PARSING LANGUAGE MODELLING

Contextualized Embeddings in Named-Entity Recognition: An Empirical Study on Generalization

22 Jan 2020 · btaille/contener

Contextualized embeddings use unsupervised language model pretraining to compute word representations depending on their context.

LANGUAGE MODELLING NAMED ENTITY RECOGNITION

RobBERT: a Dutch RoBERTa-based Language Model

17 Jan 2020 · iPieter/RobBERT

Pre-trained language models have been dominating the field of natural language processing in recent years, and have led to significant performance gains for various complex natural language tasks.

LANGUAGE MODELLING SENTIMENT ANALYSIS

Block-wise Dynamic Sparseness

14 Jan 2020 · hadifar/dynamic-sparseness

In this paper, we present a new method for dynamic sparseness, whereby part of the computations are omitted dynamically, based on the input.

LANGUAGE MODELLING

Reformer: The Efficient Transformer

13 Jan 2020 · google/trax

Large Transformer models routinely achieve state-of-the-art results on a number of tasks, but training these models can be prohibitively costly, especially on long sequences.

LANGUAGE MODELLING

Revisiting Challenges in Data-to-Text Generation with Fact Grounding

WS 2019 · wanghm92/rw_fg

Data-to-text generation models face challenges in ensuring data fidelity by referring to the correct input source.

DATA-TO-TEXT GENERATION LANGUAGE MODELLING

12 Jan 2020