View-Consistent Heterogeneous Network on Graphs With Few Labeled Nodes

Performing transductive learning on graphs with very few labeled samples, that is, two or three per category, is challenging due to the lack of supervision. In existing work, self-supervised learning via a single-view model is widely adopted to address this problem. However, recent observations show that multiview representations of an object share the same semantic information in the high-level feature space. For each sample, we therefore generate heterogeneous representations and use a view-consistency loss to align them. Multiview representations also make it possible to supervise pseudolabel generation through mutual supervision between views. In this article, we thus propose a view-consistent heterogeneous network (VCHN) to learn better representations by aligning view-agnostic semantics. Specifically, VCHN constrains the predictions of two views so that the views can supervise each other. To make the best use of cross-view information, we further propose a novel training strategy to generate more reliable pseudolabels, which thus enhances the predictions of the VCHN. Extensive experimental results on three benchmark datasets demonstrate that our method achieves superior performance over state-of-the-art methods under very low label rates.
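The abstract describes two mechanisms: a view-consistency loss that pulls the class predictions of two heterogeneous views together, and a pseudolabeling rule in which the views supervise each other. Below is a minimal sketch of how such terms could look; it is not the authors' code, and the symmetric-KL consistency term, the agreement-and-confidence pseudolabel rule, and all function names are assumptions for illustration.

```python
# Hypothetical sketch of view-consistency and mutual pseudolabeling
# (not the VCHN reference implementation).
import torch
import torch.nn.functional as F


def view_consistency_loss(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """Symmetric KL divergence between the class distributions of two views
    (one plausible choice of consistency term; assumption, not from the paper)."""
    log_p_a = F.log_softmax(logits_a, dim=-1)
    log_p_b = F.log_softmax(logits_b, dim=-1)
    return 0.5 * (
        F.kl_div(log_p_a, log_p_b.exp(), reduction="batchmean")
        + F.kl_div(log_p_b, log_p_a.exp(), reduction="batchmean")
    )


def mutual_pseudolabels(logits_a, logits_b, threshold=0.9):
    """Accept a pseudolabel only when both views predict the same class and
    both are confident (hypothetical agreement rule standing in for the
    paper's cross-view training strategy)."""
    prob_a, pred_a = F.softmax(logits_a, dim=-1).max(dim=-1)
    prob_b, pred_b = F.softmax(logits_b, dim=-1).max(dim=-1)
    mask = (pred_a == pred_b) & (prob_a > threshold) & (prob_b > threshold)
    return pred_a, mask  # pseudolabels and the mask of nodes that receive one


# Demo on random logits for 5 unlabeled nodes and 3 classes.
logits_a = torch.randn(5, 3)
logits_b = torch.randn(5, 3)
loss = view_consistency_loss(logits_a, logits_b)
labels, mask = mutual_pseudolabels(logits_a, logits_b)
```

In this sketch, `logits_a` and `logits_b` would come from the two heterogeneous view models applied to the same nodes; the consistency loss is applied to all nodes, while only the masked pseudolabels would enter a standard cross-entropy term on the unlabeled set.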


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Node Classification | CiteSeer (0.5%) | VCHN | Accuracy | 65.6% | #2 |
| Node Classification | CiteSeer (1%) | VCHN | Accuracy | 70.1% | #1 |
| Node Classification | Cora (0.5%) | VCHN | Accuracy | 74.9% | #2 |
| Node Classification | Cora (1%) | VCHN | Accuracy | 78.1% | #3 |
| Node Classification | Cora (3%) | VCHN | Accuracy | 83.1% | #2 |
| Node Classification | PubMed (0.03%) | VCHN | Accuracy | 71.8% | #1 |
| Node Classification | PubMed (0.05%) | VCHN | Accuracy | 74.3% | #1 |
| Node Classification | PubMed (0.1%) | VCHN | Accuracy | 76.8% | #2 |
