Graph representation learning serves as the core of important prediction tasks, ranging from product recommendation to fraud detection.
We present a contrastive learning approach with data augmentation techniques to learn document representations in an unsupervised manner.
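The contrastive objective typically used in such approaches can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's actual model): an InfoNCE-style loss over two augmented views of the same batch of documents, where matching rows are positive pairs and all other rows serve as negatives. The function name and NumPy implementation are assumptions for illustration only.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss between two augmented views.

    z1, z2: (batch, dim) embeddings of the same documents under two
    different augmentations; row i of z1 and row i of z2 form a
    positive pair, all other rows act as negatives.
    """
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature        # (batch, batch) similarities
    # numerically stable log-softmax; positives sit on the diagonal
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss pulls the two augmented views of each document together while pushing apart views of different documents, which is what lets the representations be learned without labels.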
A cross-view association model is learned to bridge the embeddings of ontological concepts and their corresponding instance-view entities.
Recent studies have demonstrated the vulnerability of deep neural networks against adversarial examples.
Because texts typically contain a large proportion of task-irrelevant words, accurately aligning aspects with their corresponding sentiment descriptions is the most crucial and challenging step.
Graph Neural Networks (GNNs) have been shown to be powerful tools for graph analytics.
Explanations that interpret each instance independently are not sufficient to provide a global understanding of the learned GNN model; this lack of generalizability hinders their use in the inductive setting.
Inductive and unsupervised graph learning is a critical technique for predictive or information retrieval tasks where label information is difficult to obtain.
Graph representation learning, which aims to learn low-dimensional representations that capture the geometric dependencies between nodes in the original graph, has gained increasing popularity in a variety of graph analysis tasks, including node classification and link prediction.
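The goal stated above can be made concrete with a toy sketch. This is a hypothetical, minimal example (not any particular paper's method): a spectral embedding of a small graph's adjacency matrix, with the resulting node vectors scored by an inner product as is commonly done for link prediction. The graph, the `link_score` helper, and the embedding dimension are assumptions for illustration.

```python
import numpy as np

# a toy undirected graph given as an edge list
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# spectral embedding: the top-d eigenvectors of the adjacency matrix
# give each node a low-dimensional coordinate reflecting graph structure
vals, vecs = np.linalg.eigh(A)   # eigenvalues in ascending order
d = 2
Z = vecs[:, -d:]                 # (n, d) node embeddings

def link_score(u, v):
    """Inner-product score often used to predict a link between u and v."""
    return float(Z[u] @ Z[v])
```

Node classification and link prediction then operate on the rows of `Z` rather than on the raw graph, which is the sense in which the embeddings "capture" the original structure.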
Recent studies have demonstrated the vulnerability of deep convolutional neural networks against adversarial examples.
The problem of network representation learning, also known as network embedding, arises in many machine learning tasks under the assumption that a small number of factors of variation in the vertex representations can capture the "semantics" of the original network structure.