Identifying and Explaining Discriminative Attributes

IJCNLP 2019  ·  Armins Stepanjans, André Freitas

Identifying what is at the center of the meaning of a word and what discriminates it from other words is a fundamental natural language inference task. This paper describes an explicit word vector representation model (WVM) to support the identification of discriminative attributes. A core contribution of the paper is a quantitative and qualitative comparative analysis of different types of data sources and knowledge bases in the construction of explainable and explicit WVMs: (i) knowledge graphs built from dictionary definitions, (ii) entity-attribute-relationship graphs derived from images, and (iii) commonsense knowledge graphs. Through this analysis, we demonstrate that these data sources have complementary semantic aspects, supporting the creation of explicit semantic vector spaces. The explicit vector spaces are evaluated on the task of discriminative attribute identification, showing performance comparable to state-of-the-art systems on the task (F1-score = 0.69) while delivering full model transparency and explainability.
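The discriminative attribute task (SemEval 2018 Task 10) asks, for a triple (word A, word B, attribute), whether the attribute characterizes word A but not word B. A minimal sketch of how such a decision can be made over an explicit (attribute-labelled) vector space is shown below; the attribute sets are illustrative stand-ins for the features the paper extracts from dictionary definitions, Visual Genome, and ConceptNet, not the paper's actual data or scoring function.

```python
# Toy explicit vector space: each word maps to a set of named
# attributes, so every dimension is human-interpretable (illustrative
# data only, not the paper's resources).
EXPLICIT_SPACE = {
    "apple":  {"fruit", "red", "round", "edible"},
    "banana": {"fruit", "yellow", "elongated", "edible"},
}

def is_discriminative(word_a, word_b, attribute, space=EXPLICIT_SPACE):
    """Return True when `attribute` is associated with word_a but not
    word_b, i.e. it discriminates word_a from word_b."""
    attrs_a = space.get(word_a, set())
    attrs_b = space.get(word_b, set())
    return attribute in attrs_a and attribute not in attrs_b

print(is_discriminative("apple", "banana", "red"))    # True: apple is red, banana is not
print(is_discriminative("apple", "banana", "fruit"))  # False: shared attribute
```

Because each dimension carries an explicit attribute label, the model's decision can be explained directly by pointing to the attribute's presence or absence, which is the transparency the paper contrasts with opaque dense embeddings.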



Task: Relation Extraction
Dataset: SemEval 2018 Task 10
Model: Composes explicit vector spaces from WordNet definitions, ConceptNet and Visual Genome
Metric: F1-Score = 0.69
Global Rank: #5

