TAR: Neural Logical Reasoning across TBox and ABox

29 May 2022  ·  Zhenwei Tang, Shichao Pei, Xi Peng, Fuzhen Zhuang, Xiangliang Zhang, Robert Hoehndorf ·

Many ontologies, i.e., Description Logic (DL) knowledge bases, have been developed to provide rich knowledge about various domains. An ontology consists of an ABox, i.e., assertion axioms between two entities or between a concept and an entity, and a TBox, i.e., terminology axioms between two concepts. Neural logical reasoning (NLR) is a fundamental task for exploring such knowledge bases, which aims at answering multi-hop queries with logical operations based on distributed representations of queries and answers. While previous NLR methods can give specific entity-level answers, i.e., ABox answers, they cannot provide descriptive concept-level answers, i.e., TBox answers, where each concept is a description of a set of entities. In other words, previous NLR methods reason only over the ABox of an ontology while ignoring the TBox. In particular, providing TBox answers enables inferring explanations of each query in terms of descriptive concepts, which makes answers comprehensible to users and is of great use in the field of applied ontology. In this work, we formulate the problem of neural logical reasoning across TBox and ABox (TA-NLR), which requires addressing challenges in incorporating, representing, and operating on concepts. We propose an original solution named TAR for TA-NLR. First, we incorporate description logic based ontological axioms to provide the source of concepts. Then, we represent concepts and queries as fuzzy sets, i.e., sets whose elements have degrees of membership, to bridge concepts and queries with entities. Moreover, we design operators involving concepts on top of the fuzzy set representations of concepts and queries for optimization and inference. Extensive experimental results on two real-world datasets demonstrate the effectiveness of TAR for TA-NLR.
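To make the fuzzy-set view concrete, below is a minimal sketch (not the authors' implementation) of how concepts and queries can be represented as membership-degree vectors over the entity set, with logical operators acting element-wise. The entity and concept names, the min/max/complement operators, and the subsumption score are illustrative assumptions, not details taken from the paper.

```python
# Minimal illustrative sketch of fuzzy-set representations for TA-NLR-style reasoning.
# Assumptions: a toy entity universe, hand-picked membership degrees, and
# Godel-style fuzzy operators (min / max / standard complement).
import numpy as np

entities = ["aspirin", "ibuprofen", "caffeine", "insulin"]  # hypothetical ABox entities

# A concept (TBox-level description) as a fuzzy set: one membership degree per entity.
Analgesic = np.array([0.95, 0.90, 0.10, 0.05])
SmallMolecule = np.array([0.99, 0.98, 0.97, 0.02])

def fuzzy_and(a, b):   # conjunction via element-wise min (Godel t-norm)
    return np.minimum(a, b)

def fuzzy_or(a, b):    # disjunction via element-wise max (t-conorm)
    return np.maximum(a, b)

def fuzzy_not(a):      # negation as the standard complement
    return 1.0 - a

# A query such as "Analgesic AND SmallMolecule" is itself a fuzzy set, so both
# ABox answers (entities) and TBox answers (concepts) can be read off from it.
query = fuzzy_and(Analgesic, SmallMolecule)
abox_answers = [e for e, m in zip(entities, query) if m > 0.5]
print(abox_answers)  # ['aspirin', 'ibuprofen']

# A candidate TBox answer: a concept whose fuzzy set (approximately) contains the
# query's fuzzy set, checked here with a simple fuzzy-subset score (illustrative only).
def subsumption_score(query_set, concept_set):
    return float(np.mean(np.minimum(query_set, concept_set) >= query_set - 1e-6))

print(subsumption_score(query, Analgesic))  # 1.0: Analgesic covers the query
```

In this sketch, an entity-level answer is any entity with high membership in the query's fuzzy set, while a concept-level answer is a concept whose fuzzy set subsumes the query's, which mirrors how fuzzy sets bridge entities, concepts, and queries in the abstract above.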
