Adaptive Convolution for Multi-Relational Learning

NAACL 2019  ·  Xiaotian Jiang, Quan Wang, Bin Wang

We consider the problem of learning distributed representations for entities and relations of multi-relational data so as to predict missing links therein. Convolutional neural networks have recently shown their superiority for this problem, bringing increased model expressiveness while remaining parameter efficient. Despite this success, previous convolution designs fail to model full interactions between input entities and relations, which potentially limits the performance of link prediction. In this work we introduce ConvR, an adaptive convolutional network designed to maximize entity-relation interactions in a convolutional fashion. ConvR adaptively constructs convolution filters from relation representations, and applies these filters across entity representations to generate convolutional features. As such, ConvR enables rich interactions between entity and relation representations at diverse regions, and all the convolutional features generated are able to capture such interactions. We evaluate ConvR on multiple benchmark datasets. Experimental results show that: (1) ConvR performs substantially better than competitive baselines in almost all the metrics and on all the datasets; (2) Compared with state-of-the-art convolutional models, ConvR is not only more effective but also more efficient. It offers a 7% increase in MRR and a 6% increase in Hits@10, while saving 12% in parameter storage.
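The core idea described above can be sketched in a few lines of NumPy. This is a minimal illustration with toy dimensions chosen for readability, not the paper's hyperparameters: the relation embedding is reshaped into a small bank of convolution filters, which are then slid over the 2D-reshaped subject-entity embedding to produce the convolutional features.

```python
# Hedged sketch of ConvR-style adaptive convolution.
# All shapes (4x4 entity "image", two 2x2 filters) are illustrative
# assumptions; the paper's actual dimensions differ.
import numpy as np

def convr_features(entity_vec, relation_vec, ent_shape=(4, 4),
                   n_filters=2, filt_shape=(2, 2)):
    """Convolve relation-derived filters over a reshaped entity embedding."""
    h, w = ent_shape
    fh, fw = filt_shape
    # Reshape the entity embedding into a 2D "image".
    E = entity_vec.reshape(h, w)
    # Adaptively construct the filters from the relation embedding.
    F = relation_vec.reshape(n_filters, fh, fw)
    # Valid cross-correlation of each relation filter over the entity map.
    out = np.empty((n_filters, h - fh + 1, w - fw + 1))
    for k in range(n_filters):
        for i in range(h - fh + 1):
            for j in range(w - fw + 1):
                out[k, i, j] = np.sum(E[i:i + fh, j:j + fw] * F[k])
    # Flattened features; in the paper these feed a fully connected
    # projection before scoring against the object entity.
    return out.ravel()

rng = np.random.default_rng(0)
e = rng.normal(size=16)          # entity embedding, d_e = 16
r = rng.normal(size=2 * 2 * 2)   # relation embedding, d_r = 8
feats = convr_features(e, r)
print(feats.shape)               # (18,) = 2 filters x 3x3 feature maps
```

Because every filter is built from the relation vector, each output feature mixes relation and entity parameters at some region of the entity map, which is the "full interaction" property the abstract contrasts with earlier designs such as ConvE, where filters are relation-independent.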


Results from the Paper


Task             Dataset     Model   Metric    Value   Global Rank
Link Prediction  FB15k       ConvR   MRR       0.782   #14
                                     Hits@10   0.887   #11
                                     Hits@3    0.826   #9
                                     Hits@1    0.720   #10
Link Prediction  FB15k-237   ConvR   MRR       0.350   #31
                                     Hits@10   0.528   #38
                                     Hits@3    0.385   #25
                                     Hits@1    0.261   #26
Link Prediction  WN18        ConvR   MRR       0.951   #6
                                     Hits@10   0.958   #10
                                     Hits@3    0.955   #2
                                     Hits@1    0.947   #3
Link Prediction  WN18RR      ConvR   MRR       0.475   #43
                                     Hits@10   0.537   #55
                                     Hits@3    0.489   #36
                                     Hits@1    0.443   #25