Natural Language Understanding

270 papers with code • 4 benchmarks • 43 datasets

Natural Language Understanding is a core area of Natural Language Processing that encompasses tasks such as text classification, natural language inference, and story comprehension.

Source: Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?

Greatest papers with code

ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators

tensorflow/models ICLR 2020

Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not.

Language Modelling • Natural Language Understanding +2
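The ELECTRA objective described above can be illustrated with a toy sketch: a "generator" corrupts some input tokens, and the discriminator's target is a per-token binary label marking which positions were replaced. Here random sampling stands in for the generator network, and all names (`make_rtd_example`, the toy vocabulary) are illustrative, not from the paper's code.

```python
import random

def make_rtd_example(tokens, vocab, mask_prob=0.15, seed=0):
    """Toy replaced-token-detection setup in the spirit of ELECTRA:
    corrupt some tokens and label each position 1 if it was replaced,
    else 0. Random sampling stands in for the generator model."""
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            replacement = rng.choice(vocab)  # stand-in for a generator sample
            corrupted.append(replacement)
            # If the sample happens to equal the original token, ELECTRA
            # treats it as "original" rather than "replaced".
            labels.append(0 if replacement == tok else 1)
        else:
            corrupted.append(tok)
            labels.append(0)
    return corrupted, labels

tokens = "the chef cooked the meal".split()
vocab = ["the", "chef", "ate", "meal", "cooked", "soup"]
corrupted, labels = make_rtd_example(tokens, vocab)
assert len(corrupted) == len(tokens) == len(labels)
```

Because every position gets a label (not just the masked 15%), this discriminative task provides a denser training signal than masked language modelling, which is the efficiency argument the paper makes.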

Neural Architecture Search with Reinforcement Learning

tensorflow/models 5 Nov 2016

Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model.

Image Classification • Language Modelling +2
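For readers unfamiliar with the metric: perplexity is the exponential of the mean per-token negative log-likelihood, so lower is better, and the quoted numbers imply the previous state of the art sat at 66.0 (62.4 + 3.6). A minimal sketch of the definition:

```python
import math

def perplexity(neg_log_likelihoods):
    """Perplexity = exp(mean per-token negative log-likelihood),
    using the natural log throughout."""
    return math.exp(sum(neg_log_likelihoods) / len(neg_log_likelihoods))

# A model that assigns every token probability 1/62.4 has perplexity 62.4,
# i.e. it is "as confused" as a uniform choice among 62.4 options per token.
nlls = [-math.log(1 / 62.4)] * 1000
print(round(perplexity(nlls), 1))  # 62.4
```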

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

tensorflow/models NAACL 2019

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.

Common Sense Reasoning • Conversational Response Selection +6

I-BERT: Integer-only BERT Quantization

huggingface/transformers 5 Jan 2021

Transformer based models, like BERT and RoBERTa, have achieved state-of-the-art results in many Natural Language Processing tasks.

Natural Language Inference • Natural Language Understanding +1

BARThez: a Skilled Pretrained French Sequence-to-Sequence Model

huggingface/transformers 23 Oct 2020

We show BARThez to be very competitive with state-of-the-art BERT-based French language models such as CamemBERT and FlauBERT.

Ranked #1 on Text Summarization on OrangeSum (using extra training data)

Natural Language Understanding • Self-Supervised Learning +2

Optimal Subarchitecture Extraction For BERT

huggingface/transformers 20 Oct 2020

We extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by applying recent breakthroughs in algorithms for neural architecture search.

Natural Language Understanding • Neural Architecture Search

Is Supervised Syntactic Parsing Beneficial for Language Understanding? An Empirical Investigation

huggingface/transformers 15 Aug 2020

Traditional NLP has long held (supervised) syntactic parsing necessary for successful higher-level semantic language understanding (LU).

Language Modelling • Natural Language Understanding

ConvBERT: Improving BERT with Span-based Dynamic Convolution

huggingface/transformers NeurIPS 2020

The novel convolution heads, together with the remaining self-attention heads, form a new mixed attention block that learns both global and local context more efficiently.

Natural Language Understanding
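The mixed attention idea can be sketched numerically: some heads compute ordinary global self-attention while the others aggregate only a local span of the sequence, and the head outputs are concatenated. This is a simplified illustration, not ConvBERT's actual implementation: the paper uses span-based *dynamic* convolution (kernels generated from the input), whereas the sketch below substitutes a fixed mean over the local span, and all shapes and names are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mixed_attention_block(x, n_attn_heads=2, n_conv_heads=2, kernel=3, seed=0):
    """Toy ConvBERT-style mixed block: n_attn_heads use global self-attention,
    n_conv_heads use a local convolution; outputs are concatenated.
    The local heads here average a fixed span instead of the paper's
    span-based dynamic convolution."""
    T, d = x.shape
    dh = d // (n_attn_heads + n_conv_heads)
    rng = np.random.default_rng(seed)
    outs = []
    for _ in range(n_attn_heads):  # global context: full softmax attention
        W = rng.standard_normal((d, dh)) / np.sqrt(d)
        q = k = v = x @ W  # tied projections to keep the sketch short
        outs.append(softmax(q @ k.T / np.sqrt(dh)) @ v)
    for _ in range(n_conv_heads):  # local context: average over a small span
        W = rng.standard_normal((d, dh)) / np.sqrt(d)
        v = x @ W
        pad = kernel // 2
        vp = np.pad(v, ((pad, pad), (0, 0)))
        outs.append(sum(vp[j:j + T] for j in range(kernel)) / kernel)
    return np.concatenate(outs, axis=-1)

x = np.random.default_rng(1).standard_normal((5, 8))
y = mixed_attention_block(x)
assert y.shape == (5, 8)
```

The efficiency argument follows from the split: the local heads cost O(T·k) per head instead of the O(T²) of full attention, so replacing some attention heads with convolution heads reduces compute while still covering local patterns.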

DeBERTa: Decoding-enhanced BERT with Disentangled Attention

huggingface/transformers ICLR 2021

Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks.

Common Sense Reasoning • Coreference Resolution +9

Leveraging Pre-trained Checkpoints for Sequence Generation Tasks

huggingface/transformers TACL 2020

Unsupervised pre-training of large neural models has recently revolutionized Natural Language Processing.

Machine Translation • Natural Language Understanding +3