Learning Conceptual Space Representations of Interrelated Concepts

3 May 2018 · Zied Bouraoui, Steven Schockaert

Several recently proposed methods aim to learn conceptual space representations from large text collections. These learned representations associate each object from a given domain of interest with a point in a high-dimensional Euclidean space, but they do not model the concepts from this domain, and thus cannot directly be used for categorization and related cognitive tasks. A natural solution is to represent concepts as Gaussians, learned from the representations of their instances, but this can only be done reliably if sufficiently many instances are given, which is often not the case. In this paper, we introduce a Bayesian model which addresses this problem by constructing informative priors from background knowledge about how the concepts of interest are interrelated. We show that this leads to substantially better predictions in a knowledge base completion task.
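The abstract does not spell out the Bayesian model, but the core idea it describes, shrinking a concept's Gaussian toward an informative prior derived from related concepts, can be illustrated with a standard conjugate Normal-Inverse-Wishart prior. The sketch below is a hypothetical stand-in, not the authors' method: the prior-construction heuristic (averaging the means and covariances of related concepts) and all function names are assumptions made for illustration.

```python
import numpy as np

def niw_posterior(X, mu0, kappa0, nu0, Psi0):
    """Conjugate Normal-Inverse-Wishart update: given instance vectors
    X (n x d) and NIW prior (mu0, kappa0, nu0, Psi0), return the
    parameters of the NIW posterior."""
    n, d = X.shape
    xbar = X.mean(axis=0)
    S = (X - xbar).T @ (X - xbar)               # scatter matrix of the instances
    kappa_n = kappa0 + n
    nu_n = nu0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n  # prior mean pulled toward sample mean
    diff = (xbar - mu0).reshape(-1, 1)
    Psi_n = Psi0 + S + (kappa0 * n / kappa_n) * (diff @ diff.T)
    return mu_n, kappa_n, nu_n, Psi_n

def concept_gaussian(X, related_means, related_covs, kappa0=1.0):
    """Estimate a concept's Gaussian from its instances X, using an
    informative prior built from the Gaussians of related concepts
    (a hypothetical stand-in for the paper's background knowledge)."""
    d = X.shape[1]
    mu0 = np.mean(related_means, axis=0)        # prior mean: average of related concepts
    nu0 = d + 2                                 # smallest nu with a finite prior covariance mean
    Psi0 = np.mean(related_covs, axis=0) * (nu0 - d - 1)
    mu_n, kappa_n, nu_n, Psi_n = niw_posterior(X, mu0, kappa0, nu0, Psi0)
    mean = mu_n
    cov = Psi_n / (nu_n - d - 1)                # posterior mean of the covariance
    return mean, cov

# Toy usage: only three instances in a 50-dimensional space, so the
# maximum-likelihood covariance would be singular; the prior keeps the
# estimate well-conditioned.
rng = np.random.default_rng(0)
d = 50
X = rng.normal(size=(3, d))
related_means = [rng.normal(size=d) for _ in range(5)]
related_covs = [np.eye(d) for _ in range(5)]
mean, cov = concept_gaussian(X, related_means, related_covs)
```

Under this sketch, the fewer instances a concept has, the more its estimated Gaussian falls back on the prior induced by its related concepts, which is the behavior the abstract motivates for sparsely instantiated concepts.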
