Dynamic Link Prediction
12 papers with code • 2 benchmarks • 6 datasets
Libraries: use these libraries to find Dynamic Link Prediction models and implementations
Existing approaches typically resort to node embeddings and use a recurrent neural network (RNN, broadly speaking) to regulate the embeddings and learn the temporal dynamics.
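A minimal sketch of this recurrent-embedding idea, assuming a per-node state folded over a stream of interaction events (the scalar cell weights `A` and `B` are illustrative stand-ins for a learned GRU/LSTM cell):

```python
import math
import random

random.seed(0)

d = 4  # embedding dimension (illustrative)
# One embedding vector per node; values here are just random initialisation.
embeddings = {n: [random.gauss(0, 1) for _ in range(d)] for n in range(3)}

# Tiny recurrent cell: new_h = tanh(A * h + B * x), elementwise.
# A real model would use learned recurrent weights; scalars A, B stand in.
A, B = 0.5, 0.5

def update_embedding(node, event_features):
    """Fold one interaction event into the node's recurrent state."""
    h = embeddings[node]
    embeddings[node] = [math.tanh(A * hi + B * xi)
                        for hi, xi in zip(h, event_features)]

# A stream of (node, event-feature) interactions updates embeddings in order,
# so each node's embedding reflects its temporal history.
update_embedding(0, [1.0, -1.0, 0.5, 0.0])
update_embedding(2, [0.2, 0.2, 0.2, 0.2])
```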
We propose DyGFormer, a new Transformer-based architecture for dynamic graph learning that solely learns from the sequences of nodes' historical first-hop interactions.
We present DyRep, a novel modeling framework for dynamic graphs that posits representation learning as a latent mediation process bridging two observed processes: dynamics of the network (realized as topological evolution) and dynamics on the network (realized as activities between nodes).
We consider a common case in which edges can be short-term interactions (e.g., messaging) or long-term structural connections (e.g., friendship).
Learning Self-Modulating Attention in Continuous Time Space with Applications to Sequential Recommendation
User interests are usually dynamic in the real world, which poses both theoretical and practical challenges for learning accurate preferences from rich behavior data.
To evaluate models against harder negative edges, we introduce two more challenging negative sampling strategies that improve robustness and better match real-world applications.
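One such harder strategy can be sketched as follows: instead of sampling uniformly random non-edges, sample "historical" negatives, i.e. pairs that interacted during training but not at test time, which are much easier to confuse with true links (edge sets and function names here are illustrative, not the paper's exact strategies):

```python
import random

random.seed(0)

# Hypothetical edge sets for a dynamic graph split by time.
train_edges = {(0, 1), (1, 2), (2, 3)}
test_edges = {(0, 2), (1, 3)}
num_nodes = 5

def random_negative(positives):
    """Standard strategy: any node pair that is not a current positive."""
    while True:
        u, v = random.sample(range(num_nodes), 2)
        if (u, v) not in positives:
            return (u, v)

def historical_negative(train, test):
    """Harder strategy: an edge seen in training but absent at test time."""
    candidates = [e for e in train if e not in test]
    return random.choice(candidates) if candidates else None

neg = historical_negative(train_edges, test_edges)
```

Historical negatives penalize models that merely memorize past connectivity, which is why they better reflect real deployment.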