Inductive Link Prediction
20 papers with code • 3 benchmarks • 3 datasets
In inductive link prediction, inference is performed on a new, unseen graph, whereas classical transductive link prediction performs both training and inference on the same graph.
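The distinction above can be made concrete with a toy data split. This is an illustrative sketch only; the triples and entity names are invented, and the key point is that in the inductive setting the test graph shares the relation vocabulary with training but none of the entities:

```python
# Toy triples (head, relation, tail); all names are hypothetical examples.
train_triples = [("alice", "knows", "bob"), ("bob", "works_at", "acme")]

# Transductive: test triples reuse entities seen during training.
transductive_test = [("alice", "works_at", "acme")]

# Inductive: test triples come from a new graph with entirely unseen
# entities; only the relations ("knows", "works_at") are shared.
inductive_test = [("carol", "knows", "dave"), ("dave", "works_at", "initech")]

def entities(triples):
    """Collect the set of entities appearing as head or tail."""
    return {e for h, _, t in triples for e in (h, t)}

train_ents = entities(train_triples)
assert entities(transductive_test) <= train_ents        # all entities seen
assert entities(inductive_test).isdisjoint(train_ents)  # no entity seen
```

Because the inductive test entities have no trained embeddings, models in this setting must score links from transferable signals such as relation patterns or local subgraph structure rather than per-entity parameters.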
In this work, we classify different inductive settings and study the benefits of employing hyper-relational KGs on a wide range of semi- and fully inductive link prediction tasks powered by recent advancements in graph neural networks.
Such a dictionary representation records a downsampled set of neighboring nodes as keys and allows fast construction of structural features for the joint neighborhood of multiple nodes.
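A minimal sketch of this idea, under assumptions of ours rather than the paper's exact design: each node keeps a dictionary whose keys are a fixed-size sample of its neighbors, and a joint-neighborhood feature for a node pair is then a cheap key-set operation (here, the count of shared sampled neighbors):

```python
import random

def neighbor_dict(adj, node, k=4, seed=0):
    """Map up to k sampled neighbors of `node` to their hop distance (1 here).
    The hop-distance value and sample size k are illustrative choices."""
    rng = random.Random(seed)
    nbrs = sorted(adj[node])
    return {n: 1 for n in rng.sample(nbrs, min(k, len(nbrs)))}

def joint_feature(d_u, d_v):
    """A simple joint-neighborhood feature: number of common sampled keys."""
    return len(d_u.keys() & d_v.keys())

# Tiny undirected graph given as an adjacency map (toy data).
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
d0 = neighbor_dict(adj, 0)
d1 = neighbor_dict(adj, 1)
print(joint_feature(d0, d1))  # → 1 (node 2 is the shared sampled neighbor)
```

Because the dictionaries are precomputed per node and capped at k entries, pairwise features cost only a small set intersection at query time, which is what makes the construction fast for unseen query nodes.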
Despite the importance of inductive link prediction, most previous works focused on transductive link prediction and cannot handle previously unseen entities.
TACT is inspired by the observation that the semantic correlation between two relations is strongly associated with their topological structure in knowledge graphs.
An emerging trend in representation learning over knowledge graphs (KGs) moves beyond transductive link prediction tasks over a fixed set of known entities in favor of inductive tasks that imply training on one graph and performing inference over a new graph with unseen entities.
Inductive link prediction for knowledge graphs aims at predicting missing links between unseen entities, i.e., entities not present during training.
A more challenging scenario arises when emerging KGs consist only of unseen entities; these are called disconnected emerging KGs (DEKGs).