
Natural Language Inference

174 papers with code · Natural Language Processing

Natural language inference is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".

Example:

Premise | Label | Hypothesis
A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping.
An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor.
A soccer game with multiple males playing. | entailment | Some men are playing a sport.
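The premise/hypothesis/label format above can be sketched as a minimal data structure. This is a hedged illustration only: the class and variable names here (`NLIExample`, `LABELS`) are hypothetical and not taken from any particular library or dataset loader.

```python
from dataclasses import dataclass

# The three standard NLI labels used by corpora such as SNLI and MultiNLI.
LABELS = ("entailment", "contradiction", "neutral")

@dataclass
class NLIExample:
    """A single premise/hypothesis pair with its gold label."""
    premise: str
    hypothesis: str
    label: str

    def __post_init__(self):
        # Reject anything outside the three-way label scheme.
        if self.label not in LABELS:
            raise ValueError(f"unknown label: {self.label}")

# The three examples from the table above.
examples = [
    NLIExample(
        "A man inspects the uniform of a figure in some East Asian country.",
        "The man is sleeping.",
        "contradiction",
    ),
    NLIExample(
        "An older and younger man smiling.",
        "Two men are smiling and laughing at the cats playing on the floor.",
        "neutral",
    ),
    NLIExample(
        "A soccer game with multiple males playing.",
        "Some men are playing a sport.",
        "entailment",
    ),
]

for ex in examples:
    print(f"{ex.label}: {ex.hypothesis}")
```

Real NLI models consume such pairs jointly (typically as a single concatenated input sequence) and predict one of the three labels.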

Leaderboards

Greatest papers with code

Deep contextualized word representations

NAACL 2018 zalandoresearch/flair

We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).

CITATION INTENT CLASSIFICATION COREFERENCE RESOLUTION LANGUAGE MODELLING NAMED ENTITY RECOGNITION NATURAL LANGUAGE INFERENCE QUESTION ANSWERING SEMANTIC ROLE LABELING SENTIMENT ANALYSIS

A Structured Self-attentive Sentence Embedding

9 Mar 2017 facebookresearch/pytext

This paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention.

NATURAL LANGUAGE INFERENCE SENTENCE EMBEDDING SENTIMENT ANALYSIS

XLNet: Generalized Autoregressive Pretraining for Language Understanding

NeurIPS 2019 zihangdai/xlnet

With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling.

DENOISING DOCUMENT RANKING LANGUAGE MODELLING NATURAL LANGUAGE INFERENCE QUESTION ANSWERING SENTIMENT ANALYSIS

ERNIE 2.0: A Continual Pre-training Framework for Language Understanding

29 Jul 2019 PaddlePaddle/ERNIE

Recently, pre-trained models have achieved state-of-the-art results in various language understanding tasks, which indicates that pre-training on large-scale corpora may play a crucial role in natural language processing.

LINGUISTIC ACCEPTABILITY MULTI-TASK LEARNING NATURAL LANGUAGE INFERENCE QUESTION ANSWERING SEMANTIC TEXTUAL SIMILARITY SENTIMENT ANALYSIS

Pre-Training with Whole Word Masking for Chinese BERT

19 Jun 2019 ymcui/Chinese-BERT-wwm

In this technical report, we adapt whole word masking to Chinese text, masking the entire word rather than individual Chinese characters, which introduces an additional challenge for the Masked Language Model (MLM) pre-training task.

DOCUMENT CLASSIFICATION LANGUAGE MODELLING MACHINE READING COMPREHENSION NAMED ENTITY RECOGNITION NATURAL LANGUAGE INFERENCE SENTIMENT ANALYSIS

XNLI: Evaluating Cross-lingual Sentence Representations

EMNLP 2018 facebookresearch/XLM

State-of-the-art natural language processing systems rely on supervision in the form of annotated data to learn competent models.

CROSS-LINGUAL NATURAL LANGUAGE INFERENCE MACHINE TRANSLATION

Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning

ICLR 2018 facebookresearch/InferSent

In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model.

MULTI-TASK LEARNING NATURAL LANGUAGE INFERENCE PARAPHRASE IDENTIFICATION SEMANTIC TEXTUAL SIMILARITY