Link Prediction
808 papers with code • 78 benchmarks • 63 datasets
Link Prediction is a task in graph and network analysis whose goal is to predict missing or future connections between nodes in a network. Given a partially observed network, link prediction infers which links are most likely to be missing or to form, based on the observed connections and the structure of the network.
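A classical baseline for this task scores each unobserved node pair by the overlap of the two endpoints' neighborhoods. The following is a minimal sketch using NetworkX's Jaccard-coefficient heuristic on a hypothetical toy graph (the graph and node names are illustrative, not from any benchmark):

```python
import networkx as nx

# A small, partially observed graph (toy example for illustration).
G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("B", "D"), ("C", "D"), ("D", "E"),
])

# Score every non-edge with the Jaccard coefficient:
# |N(u) & N(v)| / |N(u) | N(v)|. Higher scores suggest a likely link.
candidates = list(nx.non_edges(G))
scores = sorted(
    nx.jaccard_coefficient(G, candidates),
    key=lambda triple: triple[2],
    reverse=True,
)
for u, v, p in scores:
    print(f"({u}, {v}) -> {p:.2f}")
```

Here the pair (A, D) ranks highest, since A and D share two common neighbors (B and C). Neural approaches such as the GNN-based models listed below replace this hand-crafted score with learned node embeddings.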
(Image credit: Inductive Representation Learning on Large Graphs)
Libraries
Use these libraries to find Link Prediction models and implementations
Subtasks
Latest papers
Hierarchical Attention Models for Multi-Relational Graphs
BR-GCN models use bi-level attention to learn node embeddings through (1) node-level attention, and (2) relation-level attention.
Mitigating Heterogeneity among Factor Tensors via Lie Group Manifolds for Tensor Decomposition Based Temporal Knowledge Graph Embedding
Recent studies have highlighted the effectiveness of tensor decomposition methods in the Temporal Knowledge Graphs Embedding (TKGE) task.
GLEMOS: Benchmark for Instantaneous Graph Learning Model Selection
The choice of a graph learning (GL) model (i.e., a GL algorithm and its hyperparameter settings) has a significant impact on the performance of downstream tasks.
MPXGAT: An Attention based Deep Learning Model for Multiplex Graphs Embedding
Graph representation learning has rapidly emerged as a pivotal field of study.
Diffusion-based Negative Sampling on Graphs for Link Prediction
Furthermore, in the context of link prediction, most previous methods sample negative nodes from existing substructures of the graph, missing out on potentially better samples in the latent space.
Less is More: One-shot Subgraph Reasoning on Large-scale Knowledge Graphs
To deduce new facts on a knowledge graph (KG), a link predictor learns from the graph structure and collects local evidence to find the answer to a given query.
RepoHyper: Better Context Retrieval Is All You Need for Repository-Level Code Completion
Code Large Language Models (CodeLLMs) have demonstrated impressive proficiency in code completion tasks.
Task-Oriented GNNs Training on Large Knowledge Graphs for Accurate and Efficient Modeling
We refer to this subgraph as a task-oriented subgraph (TOSG), which contains a subset of task-related node and edge types in G. Training the task using TOSG instead of G alleviates the excessive computation required for a large KG.
Spectral Invariant Learning for Dynamic Graphs under Distribution Shifts
In this paper, we discover that there exist cases with distribution shifts unobservable in the time domain while observable in the spectral domain, and propose to study distribution shifts on dynamic graphs in the spectral domain for the first time.
Multi-perspective Improvement of Knowledge Graph Completion with Large Language Models
Knowledge graph completion (KGC) is a widely used method to tackle incompleteness in knowledge graphs (KGs) by making predictions for missing links.