KBGAN: Adversarial Learning for Knowledge Graph Embeddings

NAACL 2018  ·  Liwei Cai, William Yang Wang

We introduce KBGAN, an adversarial learning framework that improves the performance of a wide range of existing knowledge graph embedding models. Because knowledge graphs typically contain only positive facts, sampling useful negative training examples is a non-trivial task. The conventional method of generating negative facts is to replace the head or tail entity of a fact with a uniformly sampled random entity, but most of the resulting negatives are easily distinguished from positive facts and contribute little to training. Inspired by generative adversarial networks (GANs), we use one knowledge graph embedding model as a negative sample generator to assist the training of our desired model, which acts as the discriminator. The framework is independent of the concrete form of the generator and discriminator, and can therefore use a wide variety of knowledge graph embedding models as its building blocks. In experiments, we adversarially train two translation-based models, TransE and TransD, each assisted by one of two probability-based models, DistMult and ComplEx. We evaluate KBGAN on the link prediction task using three knowledge base completion datasets: FB15k-237, WN18 and WN18RR. Experimental results show that adversarial training substantially improves the performance of the target embedding models under various settings.
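
To make the generator/discriminator roles concrete, below is a minimal, hypothetical sketch of a single KBGAN training step in PyTorch. It assumes both models expose a `score(h, r, t)` method returning higher values for more plausible triples, uses a margin-based ranking loss for the discriminator, and a REINFORCE-style policy-gradient update for the generator (since sampled negatives are discrete); names and shapes are illustrative, not the authors' reference implementation.

    import torch
    import torch.nn.functional as F

    def kbgan_step(generator, discriminator, h, r, t, candidate_tails, margin=1.0):
        # 1) Generator defines a distribution over candidate corrupted tails
        #    (candidate_tails: (batch, n_cand) entity ids; score is assumed batched).
        gen_scores = generator.score(h.unsqueeze(1), r.unsqueeze(1), candidate_tails)
        probs = F.softmax(gen_scores, dim=1)
        idx = torch.multinomial(probs, num_samples=1)       # sample one negative per fact
        neg_t = candidate_tails.gather(1, idx).squeeze(1)

        # 2) Discriminator: margin-based ranking loss on (positive, sampled negative).
        pos = discriminator.score(h, r, t)
        neg = discriminator.score(h, r, neg_t)
        d_loss = F.relu(margin - pos + neg).mean()

        # 3) Generator: policy-gradient update; the discriminator's score of the
        #    sampled negative serves as the reward (no gradient flows through it).
        reward = neg.detach()
        log_prob = torch.log(probs.gather(1, idx).squeeze(1) + 1e-9)
        g_loss = -(reward * log_prob).mean()

        return d_loss, g_loss

In practice the two losses would be backpropagated into the discriminator and generator parameters respectively, with separate optimizers.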


Results from the Paper


Task            | Dataset   | Model                    | Metric  | Metric Value | Global Rank
Link Prediction | FB15k-237 | KBGAN (TransD + ComplEx) | MRR     | 0.277        | # 58
Link Prediction | FB15k-237 | KBGAN (TransD + ComplEx) | Hits@10 | 0.458        | # 58
Link Prediction | WN18      | KBGAN (TransD + ComplEx) | MRR     | 0.779        | # 30
Link Prediction | WN18      | KBGAN (TransD + ComplEx) | Hits@10 | 0.948        | # 24
Link Prediction | WN18RR    | KBGAN (TransD + ComplEx) | MRR     | 0.215        | # 70
Link Prediction | WN18RR    | KBGAN (TransD + ComplEx) | Hits@10 | 0.469        | # 68
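
MRR and Hits@10 above are the standard link-prediction metrics computed from the rank of the true entity among all candidate replacements. A small sketch of how they follow from a list of (already filtered) 1-based ranks, with an illustrative example, is shown below; the helper name is ours.

    def mrr_and_hits_at_k(ranks, k=10):
        """Mean Reciprocal Rank and Hits@k from per-triple ranks,
        where ranks[i] is the 1-based rank of the true entity for test triple i."""
        mrr = sum(1.0 / r for r in ranks) / len(ranks)
        hits = sum(1 for r in ranks if r <= k) / len(ranks)
        return mrr, hits

    # Example: three test triples ranked 1, 4 and 20
    print(mrr_and_hits_at_k([1, 4, 20]))  # (0.433..., 0.666...)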

Methods