To this end, we propose a novel Structure-Preserving Graph Representation Learning (SPGRL) method to fully capture the structural information of graphs.
Specifically, we maximize the mutual information (MI) between instances and their representations with a low-bias MI estimator to perform self-supervised pre-training.
A self-supervised loss is designed to maximize the agreement between the embeddings of the same node in the topology graph and the feature graph.
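A minimal sketch of such an agreement objective is given below, assuming an InfoNCE-style contrastive loss over normalized node embeddings from the two views; the names (node_agreement_loss, topo_emb, feat_emb, temperature) are illustrative and not taken from the SPGRL paper.

```python
import torch
import torch.nn.functional as F

def node_agreement_loss(topo_emb: torch.Tensor,
                        feat_emb: torch.Tensor,
                        temperature: float = 0.5) -> torch.Tensor:
    """InfoNCE-style loss that pulls together the two embeddings of the same
    node (one from the topology graph, one from the feature graph) and pushes
    apart embeddings of different nodes.

    topo_emb, feat_emb: [num_nodes, dim] node embeddings from the two views.
    """
    z1 = F.normalize(topo_emb, dim=1)
    z2 = F.normalize(feat_emb, dim=1)

    # Pairwise cosine similarities between the two views, scaled by temperature.
    logits = z1 @ z2.t() / temperature          # [num_nodes, num_nodes]

    # The positive pair for node i is the i-th embedding in the other view.
    targets = torch.arange(z1.size(0), device=z1.device)

    # Symmetric cross-entropy: view1 -> view2 and view2 -> view1.
    loss = 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
    return loss
```

In this toy form, the positive pair for node i is its own embedding in the other view, and all remaining nodes act as negatives.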
The goal of few-shot classification is to classify new categories with few labeled examples within each class.
The category gap between training and evaluation has been characterised as one of the main obstacles to the success of Few-Shot Learning (FSL).
Convolutional Neural Networks (CNNs) have achieved tremendous success in a number of learning tasks including image classification.
Few-shot learning aims to recognize new classes with few annotated instances within each category.
Convolutional Neural Networks (CNNs) have achieved tremendous success in a number of learning tasks, e.g., image classification.
To this end, we propose the Mutual Information Gradient Estimator (MIGE) for representation learning based on the score estimation of implicit distributions.
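For reference, the quantity whose gradient is estimated is the standard mutual information between an input $x$ and its representation $z$; the entropy decomposition below is a schematic view of the score-based idea, with the exact estimator left to the paper.

$$I(x; z) \;=\; \mathbb{E}_{p(x,z)}\!\left[\log \frac{p(x,z)}{p(x)\,p(z)}\right] \;=\; H(z) - H(z \mid x)$$

MIGE then approximates $\nabla_\theta I(x;z)$ by plugging sample-based score estimates of the implicit representation distribution, such as $\widehat{\nabla_z \log p(z)}$, into the gradients of these entropy terms.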
Recurrent neural networks (RNNs) have recently achieved remarkable successes in a number of applications.
By formulating graph construction and kernel learning in a unified framework, the graph and consensus kernel can be iteratively enhanced by each other.
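As a rough illustration of such alternation (not the paper's actual objective or update rules), the sketch below rebuilds a kNN similarity graph from the current consensus kernel and then re-weights the base kernels by their agreement with that graph; the kernel-alignment heuristic and all names are assumptions made for this example.

```python
import numpy as np

def alternating_graph_kernel(base_kernels, n_iters=10, k_neighbors=10):
    """Schematic alternating scheme: a similarity graph is rebuilt from the
    current consensus kernel, and the consensus kernel is re-weighted using
    the graph, so the two estimates refine each other.

    base_kernels: list of [n, n] precomputed kernel matrices.
    """
    consensus = np.mean(base_kernels, axis=0)            # start from uniform weights
    weights = np.ones(len(base_kernels)) / len(base_kernels)

    for _ in range(n_iters):
        # --- Graph construction step: sparse kNN graph from the consensus kernel.
        graph = np.zeros_like(consensus)
        for i in range(consensus.shape[0]):
            neighbors = np.argsort(consensus[i])[-(k_neighbors + 1):]
            graph[i, neighbors] = consensus[i, neighbors]
        graph = (graph + graph.T) / 2                    # symmetrize

        # --- Kernel learning step: weight each base kernel by its alignment
        #     with the current graph (simple Frobenius-alignment heuristic).
        align = np.array([
            np.sum(K * graph) / (np.linalg.norm(K) * np.linalg.norm(graph) + 1e-12)
            for K in base_kernels
        ])
        weights = align / align.sum()
        consensus = sum(w * K for w, K in zip(weights, base_kernels))

    return graph, consensus, weights
```

Each pass lets a cleaner graph sharpen the kernel weights, and the sharpened consensus kernel in turn yields a better graph, mirroring the iterative mutual enhancement described above.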