Developing scalable solutions for training Graph Neural Networks (GNNs) on link prediction tasks is challenging due to their strong data dependencies, which entail high computational cost and a large memory footprint.
Multi-target domain adaptation is a powerful extension of domain adaptation in which a single classifier is learned for multiple unlabeled target domains.
Knowledge graph embedding methods learn embeddings of entities and relations in a low-dimensional space, which can then be used for various downstream machine learning tasks such as link prediction and entity matching.
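As a concrete illustration, the following is a minimal sketch in the style of TransE, one well-known embedding method (an assumed example; the text does not name a specific method). Entities and relations are mapped to low-dimensional vectors, and a triple (h, r, t) is scored by how close h + r lies to t, so that plausible triples receive small distances. The entity and relation names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # low-dimensional embedding space

# Hypothetical toy vocabulary of entities and relations.
entities = {"paris": 0, "france": 1, "berlin": 2}
relations = {"capital_of": 0}

# Randomly initialized embeddings; in practice these are learned.
ent_emb = rng.normal(size=(len(entities), dim))
rel_emb = rng.normal(size=(len(relations), dim))

def score(h: str, r: str, t: str) -> float:
    """TransE-style score: L2 distance ||h + r - t||.

    Lower values indicate more plausible triples.
    """
    return float(np.linalg.norm(
        ent_emb[entities[h]] + rel_emb[relations[r]] - ent_emb[entities[t]]
    ))

s = score("paris", "capital_of", "france")
```

During training, the embeddings are optimized so that observed triples score lower than corrupted ones; at inference time, the same score ranks candidate tails for link prediction.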
Furthermore, the expressivity of the learned representation depends on the quality of negative samples used during training.
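One common way such negative samples are produced is uniform corruption of positive triples, sketched below under that assumption (the text does not specify a sampling scheme): a positive triple (h, r, t) is corrupted by replacing either its head or its tail with a random entity. The entity count and triple ids are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
num_entities = 100
positive = (3, 0, 7)  # hypothetical (head, relation, tail) ids

def corrupt(triple, num_entities, rng):
    """Replace the head or the tail (chosen at random) with a random entity.

    Note: this naive sampler may occasionally reproduce the positive triple
    or another true triple; filtering such cases is omitted for brevity.
    """
    h, r, t = triple
    if rng.random() < 0.5:
        h = int(rng.integers(num_entities))
    else:
        t = int(rng.integers(num_entities))
    return (h, r, t)

negatives = [corrupt(positive, num_entities, rng) for _ in range(5)]
```

Harder negative-sampling schemes (e.g. sampling near the decision boundary) tend to yield more expressive embeddings than this uniform baseline, which motivates the concern about negative-sample quality.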
In recommender systems (RSs), predicting the next item a user will interact with is critical for user retention.
Most studies ignore edge directionality in order to learn high-quality representations optimized for node classification.