Lexical Entailment
16 papers with code • 2 benchmarks • 5 datasets
Lexical Entailment is the task of identifying the semantic relation, if any, that holds between two words, as in the triple (pigeon, hyponym, animal).
Source: Integrating Multiplicative Features into Supervised Distributional Methods for Lexical Entailment
Most implemented papers
Hierarchical Density Order Embeddings
By representing words with probability densities rather than point vectors, probabilistic word embeddings can capture rich and interpretable semantic information and uncertainty.
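A minimal sketch of the density-embedding idea, assuming diagonal Gaussian word representations and using KL divergence as the asymmetric entailment score; the paper's actual thresholded divergence and training objective are not reproduced here, and the toy vectors below are purely illustrative.

```python
import numpy as np

def kl_diag_gaussians(mu_p, var_p, mu_q, var_q):
    """KL(p || q) for diagonal Gaussians. The divergence is asymmetric, so it
    can serve as a directional score: low KL(specific || general) suggests the
    specific concept is 'contained' in the general one."""
    return 0.5 * np.sum(
        np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0
    )

# Toy diagonal-Gaussian embeddings (hypothetical values, not trained).
pigeon = (np.array([0.9, 0.1]), np.array([0.05, 0.05]))   # narrow density: specific term
animal = (np.array([0.8, 0.2]), np.array([0.60, 0.60]))   # broad density: general term

forward = kl_diag_gaussians(*pigeon, *animal)    # pigeon -> animal
backward = kl_diag_gaussians(*animal, *pigeon)   # animal -> pigeon
print(f"KL(pigeon||animal)={forward:.3f}  KL(animal||pigeon)={backward:.3f}")
# The asymmetry (forward << backward) is what density embeddings exploit to
# model hypernymy, e.g. (pigeon, hyponym, animal).
```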
TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP
TextAttack also includes data augmentation and adversarial training modules for using components of adversarial attacks to improve model accuracy and robustness.
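A short data-augmentation sketch using one of TextAttack's pre-built augmenters; the constructor arguments shown reflect one version of the library and may differ in yours, so treat this as an assumption-laden illustration rather than canonical usage.

```python
# Synonym-substitution augmentation with TextAttack's WordNetAugmenter.
from textattack.augmentation import WordNetAugmenter

augmenter = WordNetAugmenter(
    pct_words_to_swap=0.2,          # fraction of words to replace per example
    transformations_per_example=2,  # augmented variants generated per input
)

original = "A pigeon is a kind of animal."
for variant in augmenter.augment(original):
    print(variant)  # synonym-swapped paraphrases for training-set augmentation
```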
RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark
In this paper, we introduce an advanced Russian general language understanding evaluation benchmark -- RussianSuperGLUE.
Experiments with Three Approaches to Recognizing Lexical Entailment
Two general strategies for recognizing lexical entailment (RLE) have been proposed: one is to manually construct an asymmetric similarity measure over context vectors (directional similarity); the other is to treat RLE as learning to recognize semantic relations with supervised machine learning techniques (relation classification).
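A minimal sketch of the first (directional-similarity) strategy, using Weeds precision over toy sparse context vectors as a representative asymmetric measure; it is illustrative and not necessarily one of the specific measures evaluated in the paper, and the co-occurrence weights are hypothetical.

```python
def weeds_precision(narrow, broad):
    """Fraction of `narrow`'s feature mass that also occurs among `broad`'s
    features. A high value suggests narrow's contexts are included in broad's,
    i.e. narrow entails (is a hyponym of) broad."""
    shared = sum(w for f, w in narrow.items() if f in broad)
    total = sum(narrow.values())
    return shared / total if total else 0.0

# Toy co-occurrence weights (hypothetical counts, for illustration only).
pigeon = {"coo": 4.0, "feathers": 3.0, "fly": 2.0, "park": 1.0}
animal = {"feathers": 2.0, "fly": 1.0, "fur": 3.0, "coo": 1.0, "legs": 2.0}

print(weeds_precision(pigeon, animal))  # high: pigeon -> animal is plausible
print(weeds_precision(animal, pigeon))  # lower: animal -> pigeon is not
```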
Representing Meaning with a Combination of Logical and Distributional Models
In this paper, we focus on the three components of a practical system integrating logical and distributional models: 1) Parsing and task representation is the logic-based part where input problems are represented in probabilistic logic.
A Consolidated Open Knowledge Representation for Multiple Texts
We propose to move from Open Information Extraction (OIE) ahead to Open Knowledge Representation (OKR), aiming to represent information conveyed jointly in a set of texts in an open text-based manner.
Specialising Word Vectors for Lexical Entailment
We present LEAR (Lexical Entailment Attract-Repel), a novel post-processing method that transforms any input word vector space to emphasise the asymmetric relation of lexical entailment (LE), also known as the IS-A or hyponymy-hypernymy relation.
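A sketch of the kind of asymmetric scoring a LEAR-specialised space supports, assuming the post-processed vectors encode generality in their norms (hypernyms receiving larger norms); the Attract-Repel training objective itself is not reproduced, and the toy vectors are hypothetical.

```python
import numpy as np

def le_score(x, y):
    """Asymmetric lexical-entailment score over specialised vectors:
    cosine distance (symmetric semantic similarity) plus a norm-difference
    term (generality). Lower score => x more likely to entail y.
    This sketches the scoring idea only, not the training procedure."""
    cos_dist = 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    norm_term = (np.linalg.norm(x) - np.linalg.norm(y)) / (
        np.linalg.norm(x) + np.linalg.norm(y)
    )
    return cos_dist + norm_term

# Toy post-processed vectors: the general term gets the larger norm,
# as the specialisation is designed to enforce.
pigeon = np.array([0.4, 0.3])
animal = np.array([0.9, 0.7])

print(le_score(pigeon, animal))  # low: pigeon IS-A animal
print(le_score(animal, pigeon))  # higher: animal IS-A pigeon is penalised
```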
Scoring Lexical Entailment with a Supervised Directional Similarity Network
Experiments show excellent performance on scoring graded lexical entailment, raising the state-of-the-art on the HyperLex dataset by approximately 25%.
SherLIiC: A Typed Event-Focused Lexical Inference Benchmark for Evaluating Natural Language Inference
We present SherLIiC, a testbed for lexical inference in context (LIiC), consisting of 3985 manually annotated inference rule candidates (InfCands), accompanied by (i) ~960k unlabeled InfCands, and (ii) ~190k typed textual relations between Freebase entities extracted from the large entity-linked corpus ClueWeb09.