
Natural Language Inference

157 papers with code · Natural Language Processing

Natural language inference (NLI) is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".

Example:

Premise | Label | Hypothesis
A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping.
An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor.
A soccer game with multiple males playing. | entailment | Some men are playing a sport.
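
To make the three-way decision concrete, here is a minimal sketch of premise/hypothesis classification with an off-the-shelf MNLI-finetuned checkpoint; the model name "roberta-large-mnli" is just one publicly available choice, not tied to any paper listed below.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Any MNLI-finetuned checkpoint works here; this one is publicly available.
name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the pair; the tokenizer inserts the separator tokens the model expects.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The checkpoint's labels are CONTRADICTION / NEUTRAL / ENTAILMENT.
print(model.config.id2label[logits.argmax(dim=-1).item()])  # expected: ENTAILMENT
```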

Latest papers with code

Transformation of Dense and Sparse Text Representations

7 Nov 2019 · morning-dews/ST

The key idea of the proposed approach is to use a Forward Transformation that maps dense representations to sparse ones.
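
The paper's Forward Transformation is learned; as a rough illustration of what mapping dense to sparse representations means, here is a toy top-k sparsifier. The function name and the fixed top-k rule are inventions for this sketch, not the paper's method.

```python
import numpy as np

def forward_transform_topk(dense: np.ndarray, k: int = 8) -> np.ndarray:
    """Toy dense-to-sparse transform: keep the k largest-magnitude
    coordinates and zero out the rest. The paper learns its Forward
    Transformation; this fixed rule only illustrates the idea."""
    sparse = np.zeros_like(dense)
    idx = np.argsort(np.abs(dense))[-k:]   # indices of the k largest entries
    sparse[idx] = dense[idx]
    return sparse

dense = np.random.randn(300)                # e.g. a 300-d sentence embedding
sparse = forward_transform_topk(dense, k=8)
print(np.count_nonzero(sparse))             # 8
```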

NATURAL LANGUAGE INFERENCE TEXT CLASSIFICATION

1 star · 07 Nov 2019

ZEN: Pre-training Chinese Text Encoder Enhanced by N-gram Representations

2 Nov 2019 · sinovation/ZEN

Moreover, it is shown that reasonable performance can be obtained when ZEN is trained on a small corpus, which is important for applying pre-training techniques to scenarios with limited data.
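
As a rough illustration of the n-gram side of this design, the sketch below collects lexicon-matched character n-grams, the kind of spans ZEN's extra n-gram encoder attends to alongside the character encoder. The toy lexicon and function are assumptions for this example; ZEN derives its lexicon from the pretraining corpus.

```python
def extract_ngrams(chars: str, lexicon: set, max_n: int = 4):
    """Collect all substrings of length 2..max_n that appear in a
    pre-built n-gram lexicon (toy stand-in for ZEN's n-gram matching)."""
    matches = []
    for i in range(len(chars)):
        for n in range(2, max_n + 1):
            gram = chars[i:i + n]
            if len(gram) == n and gram in lexicon:
                matches.append((i, gram))
    return matches

lexicon = {"自然", "语言", "自然语言"}        # toy lexicon
print(extract_ngrams("自然语言推理", lexicon))
# [(0, '自然'), (0, '自然语言'), (2, '语言')]
```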

CHINESE NAMED ENTITY RECOGNITION CHINESE WORD SEGMENTATION DOCUMENT CLASSIFICATION NATURAL LANGUAGE INFERENCE PART-OF-SPEECH TAGGING SENTENCE PAIR MODELING SENTIMENT ANALYSIS

247 stars · 02 Nov 2019

MonaLog: a Lightweight System for Natural Language Inference Based on Monotonicity

19 Oct 2019 · huhailinguist/ccg2mono

We present a new logic-based inference engine for natural language inference (NLI) called MonaLog, which is based on natural logic and the monotonicity calculus.
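
To give a flavor of the monotonicity calculus, here is a toy upward-monotone rewriting step: with token polarities in hand (MonaLog derives them from CCG parses), replacing a word with a hypernym in an upward-entailing position yields an entailed sentence. The hypernym table and polarity annotations are stand-ins for this sketch.

```python
# Toy natural-logic step: hypernym substitution preserves truth in an
# upward-monotone ("up") position. Polarities are given here; MonaLog
# computes them from CCG parses.
HYPERNYMS = {"soccer": "sport", "men": "people"}   # toy knowledge base

def monotone_rewrites(tokens, polarities):
    """Yield sentences entailed by one hypernym substitution."""
    for i, (tok, pol) in enumerate(zip(tokens, polarities)):
        if pol == "up" and tok in HYPERNYMS:
            yield " ".join(tokens[:i] + [HYPERNYMS[tok]] + tokens[i + 1:])

tokens = ["some", "men", "are", "playing", "soccer"]
polarities = ["up", "up", "up", "up", "up"]  # "some" is upward in both arguments
for s in monotone_rewrites(tokens, polarities):
    print(s)
# some people are playing soccer
# some men are playing sport
```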

DATA AUGMENTATION NATURAL LANGUAGE INFERENCE

1 star · 19 Oct 2019

DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter

2 Oct 2019 · huggingface/transformers

As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models on the edge and/or under constrained computational training or inference budgets remains challenging.
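
A minimal usage sketch with the huggingface/transformers pipeline API; the sentiment checkpoint named below is one of the distilled models distributed through the library's hub, chosen only because it is readily available (MNLI-finetuned DistilBERT variants exist as well).

```python
from transformers import pipeline

# Load a distilled checkpoint through the pipeline API. This model is
# fine-tuned for sentiment analysis and serves as a readily available
# example of running DistilBERT.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("A compact model that keeps most of BERT's accuracy."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```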

LANGUAGE MODELLING LINGUISTIC ACCEPTABILITY NATURAL LANGUAGE INFERENCE QUESTION ANSWERING SEMANTIC TEXTUAL SIMILARITY SENTIMENT ANALYSIS TRANSFER LEARNING

17,012 stars · 02 Oct 2019

A CCG-based Compositional Semantics and Inference System for Comparatives

2 Oct 2019 · izumi-h/fracas-comparatives_adjectives

Comparative constructions play an important role in natural language inference.

NATURAL LANGUAGE INFERENCE

1 star · 02 Oct 2019

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations

26 Sep 2019 · google-research/google-research

Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks.

LINGUISTIC ACCEPTABILITY NATURAL LANGUAGE INFERENCE QUESTION ANSWERING SEMANTIC TEXTUAL SIMILARITY

5,388 stars · 26 Sep 2019

Learning the Difference that Makes a Difference with Counterfactually-Augmented Data

26 Sep 2019 · dkaushik96/bizarro-data

While classifiers trained on either original or manipulated data alone are sensitive to spurious features (e.g., mentions of genre), models trained on the combined data are insensitive to this signal.

DATA AUGMENTATION NATURAL LANGUAGE INFERENCE SENTIMENT ANALYSIS

22 stars · 26 Sep 2019

Subword ELMo

18 Sep 2019 · Jiangtong-Li/Subword-ELMo

Embedding from Language Models (ELMo) has been shown to be effective for improving many natural language processing (NLP) tasks, and ELMo composes word representations from character information when training its language models. However, the character is an insufficient and unnatural linguistic unit for word representation. We therefore introduce Embedding from Subword-aware Language Models (ESuLMo), which learns word representations from subwords produced by unsupervised segmentation of words. We show that ESuLMo enhances four benchmark NLP tasks (syntactic dependency parsing, semantic role labeling, implicit discourse relation recognition, and textual entailment) more effectively than ELMo.
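
As an illustration of the word-to-subword step that replaces ELMo's character inputs, here is a toy greedy longest-match segmenter; ESuLMo's segmentation is learned unsupervised, so the vocabulary and matching rule here are assumptions for this sketch.

```python
def segment(word: str, vocab: set) -> list:
    """Greedy longest-match segmentation of a word into subwords.
    Falls back to single characters when no vocabulary entry matches."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):      # try longest match first
            if word[i:j] in vocab or j == i + 1:
                pieces.append(word[i:j])
                i = j
                break
    return pieces

vocab = {"un", "believ", "able", "play", "ing"}  # toy subword vocabulary
print(segment("unbelievable", vocab))  # ['un', 'believ', 'able']
print(segment("playing", vocab))       # ['play', 'ing']
```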

DEPENDENCY PARSING NATURAL LANGUAGE INFERENCE SEMANTIC ROLE LABELING

12 stars · 18 Sep 2019

Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases

EMNLP 2019 · chrisc36/debias

Our method has two stages: we (1) train a naive model that makes predictions exclusively based on dataset biases, and (2) train a robust model as part of an ensemble with the naive one in order to encourage it to focus on other patterns in the data that are more likely to generalize.
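
One common way to instantiate such an ensemble is a product of experts, where train-time log-probabilities of the robust and bias-only models are added, so the robust model receives little learning signal on examples the bias model already answers. A numpy sketch of that loss, as one plausible variant rather than the paper's exact formulation:

```python
import numpy as np

def log_softmax(x):
    # Numerically stable log-softmax over the last axis.
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

def ensemble_nll(robust_logits, bias_logits, gold):
    """Product-of-experts loss: add the two models' log-probs, then take
    cross-entropy. Only the robust model is trained; the bias-only model
    is frozen, and only the robust model is used at test time."""
    combined = log_softmax(robust_logits + log_softmax(bias_logits))
    return -combined[np.arange(len(gold)), gold].mean()

robust = np.random.randn(4, 3)   # batch of 4 examples, 3 NLI classes
bias = np.random.randn(4, 3)     # logits from the frozen naive model
gold = np.array([0, 2, 1, 0])    # gold labels
print(ensemble_nll(robust, bias, gold))
```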

NATURAL LANGUAGE INFERENCE QUESTION ANSWERING VISUAL QUESTION ANSWERING

5 stars · 09 Sep 2019