Natural Language Inference

369 papers with code • 24 benchmarks • 54 datasets

Natural language inference is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".

Example:

| Premise | Label | Hypothesis |
| --- | --- | --- |
| A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. |
| An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. |
| A soccer game with multiple males playing. | entailment | Some men are playing a sport. |
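The task reduces to three-way classification over (premise, hypothesis) pairs. A minimal sketch using the examples above, with made-up scores standing in for a model's logits over the three labels:

```python
# Three-way NLI classification over (premise, hypothesis) pairs.
# The logits below are made up for illustration; a real system would
# obtain them from an NLI model.

LABELS = ["entailment", "neutral", "contradiction"]

def predict_label(logits):
    """Return the label whose logit is largest."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return LABELS[best]

examples = [
    ("A man inspects the uniform of a figure in some East Asian country.",
     "The man is sleeping.", [0.1, 0.2, 4.0]),
    ("An older and younger man smiling.",
     "Two men are smiling and laughing at the cats playing on the floor.",
     [0.3, 3.5, 0.4]),
    ("A soccer game with multiple males playing.",
     "Some men are playing a sport.", [4.2, 0.1, 0.2]),
]

for premise, hypothesis, logits in examples:
    print(f"{predict_label(logits):13} | {hypothesis}")
```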

Greatest papers with code

Big Bird: Transformers for Longer Sequences

tensorflow/models NeurIPS 2020

To remedy this, we propose BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear.
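BigBird's sparse pattern combines windowed, global, and random attention. As a toy illustration (not the paper's implementation) of why the windowed component alone already breaks the quadratic cost:

```python
# Toy sketch of windowed sparse attention: each token attends only to
# neighbours within distance w, so the mask has O(n * w) allowed pairs
# instead of the n**2 pairs of full attention. BigBird additionally
# layers global and random attention on top of a window like this.

def window_mask(n, w):
    """Boolean attention mask: token i may attend to token j iff |i - j| <= w."""
    return [[abs(i - j) <= w for j in range(n)] for i in range(n)]

def num_attended(mask):
    """Count the allowed query-key pairs in the mask."""
    return sum(sum(row) for row in mask)

for n in (64, 128, 256):
    sparse = num_attended(window_mask(n, w=3))
    print(f"n={n}: sparse pairs={sparse}, full pairs={n * n}")
```

Doubling `n` doubles the sparse pair count but quadruples the full-attention count, which is the linear-vs-quadratic gap the abstract refers to.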

 Ranked #1 on Question Answering on Natural Questions (F1 (Long) metric)

Linguistic Acceptability • Natural Language Inference • +3

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

tensorflow/models NAACL 2019

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.

Common Sense Reasoning • Conversational Response Selection • +6

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations

tensorflow/models ICLR 2020

Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks.

Common Sense Reasoning • Linguistic Acceptability • +4

I-BERT: Integer-only BERT Quantization

huggingface/transformers 5 Jan 2021

Transformer based models, like BERT and RoBERTa, have achieved state-of-the-art results in many Natural Language Processing tasks.
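I-BERT replaces floating-point inference with integer-only arithmetic. As a toy sketch of the symmetric uniform quantization such methods build on (illustrative names; not the paper's actual integer-only kernels):

```python
# Toy sketch of symmetric uniform quantization: floats are mapped to
# signed integers plus one float scale factor. Not I-BERT's kernels.

def quantize(xs, num_bits=8):
    """Map floats to signed integers in [-(2**(b-1)-1), 2**(b-1)-1]."""
    qmax = 2 ** (num_bits - 1) - 1           # 127 for int8
    scale = max(abs(x) for x in xs) / qmax   # assumes xs has a nonzero entry
    return [round(x / scale) for x in xs], scale

def dequantize(qs, scale):
    """Recover approximate floats from the integer codes and the scale."""
    return [q * scale for q in qs]

weights = [0.42, -1.27, 0.05, 0.88]
qs, scale = quantize(weights)
print(qs)                      # integer codes
print(dequantize(qs, scale))   # reconstruction, within scale/2 per entry
```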

Natural Language Inference • Natural Language Understanding • +1

mT5: A massively multilingual pre-trained text-to-text transformer

huggingface/transformers NAACL 2021

The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks.

Common Sense Reasoning • Natural Language Inference • +2

DeBERTa: Decoding-enhanced BERT with Disentangled Attention

huggingface/transformers ICLR 2021

Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks.

Common Sense Reasoning • Coreference Resolution • +9

FlauBERT: Unsupervised Language Model Pre-training for French

huggingface/transformers LREC 2020

Language models have become a key step to achieve state-of-the-art results in many different Natural Language Processing (NLP) tasks.

Language Modelling • Natural Language Inference • +2

BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension

huggingface/transformers ACL 2020

We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token.
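The two noising operations the abstract names, sentence shuffling and span in-filling with a single mask token, can be sketched as follows (a toy illustration, not BART's implementation; `<mask>` follows BART's tokenizer convention):

```python
import random

# Toy sketch of the two BART noising operations named above.

def shuffle_sentences(sentences, seed=0):
    """Randomly permute sentence order (seeded here for reproducibility)."""
    rng = random.Random(seed)
    order = list(sentences)
    rng.shuffle(order)
    return order

def infill(tokens, start, length, mask="<mask>"):
    """Text in-filling: replace a span of tokens with a single mask token."""
    return tokens[:start] + [mask] + tokens[start + length:]

tokens = "the quick brown fox jumps over the lazy dog".split()
print(infill(tokens, start=2, length=3))
# -> ['the', 'quick', '<mask>', 'over', 'the', 'lazy', 'dog']
print(shuffle_sentences(["First sentence.", "Second sentence.", "Third sentence."]))
```

Note that in-filling hides the span's length from the model, which must regenerate the missing tokens from context.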

Abstractive Text Summarization • Denoising • +4