Language Models for Lexical Inference in Context

EACL 2021 · Martin Schmitt, Hinrich Schütze

Lexical inference in context (LIiC) is the task of recognizing textual entailment between two very similar sentences, i.e., sentences that differ in only one expression. It can therefore be seen as a variant of natural language inference (NLI) focused on lexical semantics. We formulate and evaluate the first approaches to this task based on pretrained language models (LMs): (i) a few-shot NLI classifier, (ii) a relation-induction approach based on handcrafted patterns expressing the semantics of lexical inference, and (iii) a variant of (ii) with patterns automatically extracted from a corpus. All our approaches outperform the previous state of the art, demonstrating the potential of pretrained LMs for LIiC. In an extensive analysis, we investigate the factors behind the success and failure of the three approaches.
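The pattern-based approaches (ii) and (iii) reduce LIiC to querying a masked LM: premise and hypothesis are joined by a connective pattern, and the LM's probability for the hypothesis predicate is read off a masked slot. Below is a minimal sketch of this idea using Hugging Face transformers. The pattern string ", which means that", the example sentence pair, and the single-token-predicate assumption are illustrative choices, not the paper's exact pattern inventory or scoring function.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMaskedLM.from_pretrained("roberta-large")
model.eval()

def pattern_score(premise: str, masked_hypothesis: str, candidate: str) -> float:
    """Probability that `candidate` fills the <mask> slot of the hypothesis,
    given the premise joined by a handcrafted connective pattern."""
    # Illustrative pattern; the paper evaluates a whole inventory of patterns.
    text = f"{premise}, which means that {masked_hypothesis}"
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0][0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    probs = logits.softmax(dim=-1)
    # RoBERTa's BPE marks word-initial tokens with a leading 'Ġ'; this assumes
    # the candidate predicate is a single token in the vocabulary.
    token_id = tokenizer.convert_tokens_to_ids("Ġ" + candidate)
    return probs[token_id].item()

# Does "acquired" (premise) lexically support "owns" (hypothesis)?
score = pattern_score("Google acquired YouTube",
                      "Google <mask> YouTube.", "owns")
print(f"P(owns | pattern) = {score:.4f}")
```

A decision rule would compare this probability (or an aggregate over several patterns) against scores for contrastive candidates or a threshold; the few-shot NLI classifier (i) instead fine-tunes the LM directly on premise-hypothesis pairs.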


Datasets

SherLIiC
Results from the Paper


Task         | Dataset  | Model                     | Metric | Value | Global Rank
Few-Shot NLI | SherLIiC | MANPAT^PhiPsi (RoBERTa-large) | F1 | 72.6  | #2

Methods


No methods listed for this paper.