Jointly Non-Sampling Learning for Knowledge Graph Enhanced Recommendation

1 Jul 2020  ·  Chong Chen, Min Zhang, Weizhi Ma, Yiqun Liu, and Shaoping Ma

A knowledge graph (KG) contains well-structured external information and has been shown to be effective for high-quality recommendation. However, existing KG-enhanced recommendation methods have largely focused on exploring advanced neural network architectures to better exploit the structural information of the KG. For model learning, these methods mainly rely on Negative Sampling (NS) to optimize the models for both the KG embedding task and the recommendation task. Since NS is not robust (e.g., sampling a small fraction of negative instances may discard a great deal of useful information), it is reasonable to argue that these methods are insufficient to capture the collaborative information among users, items, and entities. In this paper, we propose a novel Jointly Non-Sampling learning model for Knowledge graph enhanced Recommendation (JNSKR). Specifically, we first design a new efficient non-sampling optimization algorithm for knowledge graph embedding learning. The subgraphs are then encoded by the proposed attentive neural network to better characterize user preferences over items. Through novel designs of memorization strategies and a joint learning framework, JNSKR not only models the fine-grained connections among users, items, and entities, but also efficiently learns model parameters from the whole training data (including all non-observed data) with rather low time complexity. Experimental results on two public benchmarks show that JNSKR significantly outperforms state-of-the-art methods such as RippleNet and KGAT. Remarkably, JNSKR also shows a significant advantage in training efficiency (about 20 times faster than KGAT), which makes it more applicable to real-world large-scale systems.
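The key idea behind non-sampling learning is that a weighted squared loss over *all* user-item pairs (including every non-observed pair) can be computed without enumerating negatives: the all-pairs term decomposes into a product of small Gram matrices of the embedding tables. The sketch below illustrates this decoupling trick in NumPy; the function name, the uniform negative weight `c_neg`, and the matrix-factorization predictor are illustrative assumptions, not the exact JNSKR formulation.

```python
import numpy as np

def efficient_non_sampling_loss(P, Q, pos_pairs, c_pos=1.0, c_neg=0.1):
    """Weighted squared loss over ALL |U| x |V| user-item pairs,
    without enumerating negative instances.

    P: (|U|, d) user embeddings; Q: (|V|, d) item embeddings.
    pos_pairs: (n, 2) array of observed (user, item) index pairs.
    The constant term c_pos * n is dropped (it has zero gradient).
    Illustrative sketch -- coefficients and predictor are assumptions.
    """
    # All-pairs term c_neg * sum_{u,v} (p_u . q_v)^2 decomposes into a
    # d x d interaction of Gram matrices: O((|U| + |V|) * d^2) instead
    # of O(|U| * |V| * d).
    all_pairs = c_neg * np.sum((P.T @ P) * (Q.T @ Q))
    # Correction terms evaluated only on the (few) observed positives.
    u, v = pos_pairs[:, 0], pos_pairs[:, 1]
    r_hat = np.sum(P[u] * Q[v], axis=1)
    positives = np.sum((c_pos - c_neg) * r_hat ** 2 - 2.0 * c_pos * r_hat)
    return all_pairs + positives
```

For small matrices the result can be checked against brute-force enumeration of every pair, which is exactly what makes the trick attractive: the two computations agree, but the decomposed form never materializes the |U| x |V| prediction matrix.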



Results from the Paper

Task: Recommendation Systems · Dataset: Amazon-Book · Model: JNSKR

Metric      Value    Global Rank
Recall@10   0.1056   #1
Recall@20   0.1558   #1
Recall@40   0.2178   #1
nDCG@10     0.0842   #1
nDCG@20     0.1068   #1
nDCG@40     0.1271   #1
