Recently, many Network Representation Learning (NRL) methods have been proposed to learn vector representations for the vertices of a network.
Within this line of work, document network embedding methods are a natural choice for building representations of the scientific literature.
Since real-world objects and their interactions are often multi-modal and multi-typed, heterogeneous networks have been widely adopted as a more powerful, realistic, and general model than traditional homogeneous networks (graphs).
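As a rough illustration, a heterogeneous network can be stored as a graph whose nodes and edges each carry a type label. The bibliographic types below (author, paper, venue; writes, published_in) are purely illustrative assumptions, not drawn from any specific dataset:

```python
# A minimal sketch of a heterogeneous network: nodes and edges carry types.
from collections import defaultdict

class HeteroGraph:
    def __init__(self):
        self.node_type = {}           # node id -> type label
        self.adj = defaultdict(list)  # node id -> [(neighbor, edge type)]

    def add_node(self, node, ntype):
        self.node_type[node] = ntype

    def add_edge(self, u, v, etype):
        # store both directions for an undirected multi-typed edge
        self.adj[u].append((v, etype))
        self.adj[v].append((u, etype))

g = HeteroGraph()
g.add_node("a1", "author"); g.add_node("p1", "paper"); g.add_node("v1", "venue")
g.add_edge("a1", "p1", "writes")
g.add_edge("p1", "v1", "published_in")
print(g.adj["p1"])  # [('a1', 'writes'), ('v1', 'published_in')]
```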
Network embedding, which aims to project a network into a low-dimensional space, has become an increasingly central focus of network research.
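For concreteness, one minimal sketch of this projection is a truncated SVD of the adjacency matrix; real NRL methods (DeepWalk, node2vec, graph neural networks) are far richer, and the toy graph below is an assumed example:

```python
# A minimal sketch of network embedding: factorize the adjacency matrix so
# each node gets a low-dimensional vector.
import numpy as np

def embed_nodes(adjacency: np.ndarray, dim: int) -> np.ndarray:
    # U * sqrt(S) is one standard low-rank node embedding
    u, s, _ = np.linalg.svd(adjacency, full_matrices=False)
    return u[:, :dim] * np.sqrt(s[:dim])

# toy 4-node graph: two connected pairs
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Z = embed_nodes(A, dim=2)
print(Z.shape)  # (4, 2) -- one 2-d vector per node
```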
We train these word and topic vectors through our general model, Inductive Document Network Embedding (IDNE), by leveraging the connections in the document network.
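The sketch below illustrates the general idea of jointly training word and topic vectors against the links of a document network; it is an assumed PyTorch reconstruction, not the authors' IDNE implementation, and the dimensions and attention scheme are illustrative:

```python
# A hedged sketch: each document is embedded by a topic-attention-weighted
# average of its word vectors, and linked documents are pushed together.
import torch
import torch.nn.functional as F

vocab_size, n_topics, dim = 1000, 5, 64
W = torch.randn(vocab_size, dim, requires_grad=True)  # word vectors
T = torch.randn(n_topics, dim, requires_grad=True)    # topic vectors

def embed_doc(word_ids):
    words = W[word_ids]                    # (n_words, dim)
    attn = F.softmax(words @ T.t(), dim=0) # topic attention over words
    return (attn.t() @ words).mean(dim=0)  # aggregate over topics

def link_loss(doc_a, doc_b, label):
    score = torch.dot(embed_doc(doc_a), embed_doc(doc_b))
    return F.binary_cross_entropy_with_logits(score, label)

opt = torch.optim.Adam([W, T], lr=1e-3)
a = torch.randint(0, vocab_size, (20,))    # toy documents
b = torch.randint(0, vocab_size, (15,))
loss = link_loss(a, b, torch.tensor(1.0))  # observed link -> label 1
loss.backward(); opt.step()
```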
Latent factor models for community detection aim to find a distributed and generally low-dimensional representation, or coding, that captures the structural regularity of a network and reflects the community membership of its nodes.
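One classical instance of this idea is nonnegative matrix factorization of the adjacency matrix, reading each node's dominant factor as its community; the two-block toy graph below is an assumed example:

```python
# A minimal sketch of a latent factor model for community detection via NMF.
import numpy as np
from sklearn.decomposition import NMF

# toy graph: nodes 0-2 densely linked, nodes 3-5 densely linked
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)

H = NMF(n_components=2, init="nndsvd", random_state=0).fit_transform(A)
communities = H.argmax(axis=1)  # node -> index of dominant latent factor
print(communities)              # e.g. [0 0 0 1 1 1]
```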
Inspired by the concept of user schema in social psychology, we take a new perspective on user representation learning by constructing a shared latent space that captures the dependencies among different modalities of user-generated data.
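A minimal sketch of such a shared latent space, assuming one linear encoder per modality and a cosine-alignment objective (both illustrative choices, not the paper's model), might look like:

```python
# A hedged sketch: project assumed text and image features of the same users
# into a common space and pull each user's two views together.
import torch
import torch.nn as nn
import torch.nn.functional as F

text_dim, image_dim, shared_dim = 300, 512, 128
enc_text = nn.Linear(text_dim, shared_dim)
enc_image = nn.Linear(image_dim, shared_dim)
opt = torch.optim.Adam(
    list(enc_text.parameters()) + list(enc_image.parameters()), lr=1e-3)

text_feats = torch.randn(32, text_dim)    # 32 users' text features (toy data)
image_feats = torch.randn(32, image_dim)  # the same users' image features

zt = F.normalize(enc_text(text_feats), dim=1)
zi = F.normalize(enc_image(image_feats), dim=1)
loss = (1 - (zt * zi).sum(dim=1)).mean()  # cosine alignment across modalities
loss.backward(); opt.step()
```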
This paper considers a novel variational formulation of network embeddings, with special focus on textual networks.
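In the spirit of variational graph autoencoders, a hedged sketch of a variational embedding of a textual network is given below; the encoder, dimensions, and ELBO form are assumptions for illustration, not the paper's exact formulation:

```python
# A hedged sketch: an encoder over node text features outputs a Gaussian
# posterior per node; sampled codes reconstruct links, plus a KL penalty.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, dim = 100, 32
enc = nn.Linear(feat_dim, 2 * dim)  # outputs [mu, log-variance]

def elbo_loss(features, adjacency):
    mu, logvar = enc(features).chunk(2, dim=1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
    logits = z @ z.t()                                     # link scores
    recon = F.binary_cross_entropy_with_logits(logits, adjacency)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
    return recon + kl

x = torch.randn(6, feat_dim)          # toy node text features
A = (torch.rand(6, 6) > 0.5).float()  # toy adjacency, symmetrized below
A = ((A + A.t()) > 0).float()
loss = elbo_loss(x, A)
loss.backward()
print(float(loss))
```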
Even the methods that consider the multiplexity of a network overlook node attributes, rely on node labels for training, and fail to model the global properties of a graph.