Boosting the Speed of Entity Alignment 10×: Dual Attention Matching Network with Normalized Hard Sample Mining

29 Mar 2021  ·  Xin Mao, Wenting Wang, Yuanbin Wu, Man Lan

Seeking equivalent entities across multi-source Knowledge Graphs (KGs), a task known as \emph{entity alignment} (EA), is the pivotal step in KG integration. However, most existing EA methods are inefficient and scale poorly. A recent survey points out that some of them require several days to process a dataset containing 200,000 nodes (DWY100K). We believe over-complex graph encoders and inefficient negative sampling strategies are the two main reasons. In this paper, we propose a novel KG encoder -- Dual Attention Matching Network (Dual-AMN) -- which not only models both intra-graph and cross-graph information effectively, but also greatly reduces computational complexity. Furthermore, we propose the Normalized Hard Sample Mining Loss, which smoothly selects hard negative samples while reducing loss shift. Experimental results on widely used public datasets show that our method achieves both high accuracy and high efficiency. On DWY100K, the entire run of our method finishes in 1,100 seconds, at least 10× faster than previous work. Our method also outperforms previous works across all datasets, improving Hits@1 and MRR by 6% to 13%.
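To make the hard-negative idea concrete, here is a minimal sketch of a smooth hard-sample mining loss in plain Python. This is a hypothetical formulation for illustration only, not the paper's exact loss: it standardizes the anchor-negative similarities (the "normalized" part, which keeps the mining weights insensitive to the absolute similarity scale) and then uses a softmax with temperature `lam` to focus the margin loss on the hardest negatives without a hard top-k cutoff.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nhsm_loss(anchor, positive, negatives, gamma=1.0, lam=10.0):
    """Illustrative smooth hard-negative mining loss (hypothetical formulation).

    Negatives are weighted by a softmax over their *standardized*
    similarities to the anchor, so harder negatives dominate smoothly,
    and the weighting does not drift as the embedding scale changes.
    """
    sim_pos = cosine(anchor, positive)
    sims = [cosine(anchor, n) for n in negatives]

    # standardize similarities: this is the "normalization" step that
    # stabilizes the scale of the mining weights during training
    mean = sum(sims) / len(sims)
    std = math.sqrt(sum((s - mean) ** 2 for s in sims) / len(sims)) + 1e-8
    z = [(s - mean) / std for s in sims]

    # soft "hardest negative" selection: larger lam -> sharper focus
    expz = [math.exp(lam * zi) for zi in z]
    total = sum(expz)
    weights = [e / total for e in expz]

    # per-negative margin hinge, combined with the soft mining weights
    hinges = [max(0.0, gamma + s - sim_pos) for s in sims]
    return sum(w * h for w, h in zip(weights, hinges))
```

A pool of hard negatives (embeddings close to the anchor) yields a much larger loss than a pool of easy ones, while the softmax keeps the selection differentiable, which is the property the paper's smooth mining is after.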


Results from the Paper


Task             | Dataset       | Model    | Metric | Value | Global Rank
-----------------|---------------|----------|--------|-------|------------
Entity Alignment | DBP15k fr-en  | Dual-AMN | Hits@1 | 0.954 | # 7
Entity Alignment | DBP15k ja-en  | Dual-AMN | Hits@1 | 0.892 | # 7
Entity Alignment | DBP15k zh-en  | Dual-AMN | Hits@1 | 0.861 | # 8
Entity Alignment | DICEWS-1K     | Dual-AMN | Hit@1  | 71.6  | # 4
Entity Alignment | YAGO-WIKI50K  | Dual-AMN | Hit@1  | 89.7  | # 2

Methods


No methods listed for this paper.