Mutual Teaching for Graph Convolutional Networks

2 Sep 2020 · Kun Zhan, Chaoxi Niu

Graph convolutional networks produce good predictions for unlabeled samples because of their transductive label propagation. Since samples have different prediction confidences, we take high-confidence predictions as pseudo labels to expand the label set, so that more samples are used for updating the models. We propose a new training method called mutual teaching: we train dual models and let them teach each other during each batch. First, each network feeds forward all samples and selects those with high-confidence predictions. Second, each model is updated with the samples selected by its peer network. We view the high-confidence predictions as useful knowledge, and one network's useful knowledge teaches its peer through the peer's model update in each batch. In mutual teaching, each network's pseudo-label set therefore comes from its peer. This training strategy significantly improves performance, and extensive experimental results demonstrate that our method outperforms state-of-the-art methods under very low label rates.
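
The following is a minimal sketch of one mutual-teaching step, not the authors' released code: all names (SimpleGCN, select_confident, mutual_teaching_step) and hyperparameters (the number k of pseudo-labeled nodes per step) are hypothetical, and the paper's exact architecture, confidence criterion, and schedule may differ. It illustrates the two steps from the abstract: each network selects its high-confidence unlabeled nodes, and each network is then updated with the pseudo labels chosen by its peer.

```python
# Sketch of mutual teaching for dual GCNs (hypothetical names and settings).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCN(nn.Module):
    """Two-layer GCN operating on a dense normalized adjacency a_hat."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def forward(self, a_hat, x):
        h = F.relu(a_hat @ self.w1(x))
        return a_hat @ self.w2(h)  # logits for every node

def select_confident(logits, mask_unlabeled, k):
    """Pick the k unlabeled nodes with the highest softmax confidence."""
    probs = F.softmax(logits, dim=1)
    conf, pseudo = probs.max(dim=1)
    conf = conf.masked_fill(~mask_unlabeled, -1.0)  # exclude labeled nodes
    idx = conf.topk(k).indices
    return idx, pseudo[idx]

def mutual_teaching_step(net_a, net_b, opt_a, opt_b,
                         a_hat, x, y, mask_train, mask_unlabeled, k=32):
    # Step 1: each network feeds forward all samples and selects
    # its high-confidence predictions as pseudo labels.
    with torch.no_grad():
        idx_a, pl_a = select_confident(net_a(a_hat, x), mask_unlabeled, k)
        idx_b, pl_b = select_confident(net_b(a_hat, x), mask_unlabeled, k)

    # Step 2: each model is updated with the samples selected by its peer,
    # i.e. net_a learns from net_b's pseudo labels and vice versa.
    for net, opt, idx, pl in ((net_a, opt_a, idx_b, pl_b),
                              (net_b, opt_b, idx_a, pl_a)):
        opt.zero_grad()
        logits = net(a_hat, x)
        loss = (F.cross_entropy(logits[mask_train], y[mask_train])
                + F.cross_entropy(logits[idx], pl))
        loss.backward()
        opt.step()
```

Calling mutual_teaching_step repeatedly inside a training loop expands each network's effective label set with its peer's confident predictions, which is the mechanism the abstract credits for the gains at very low label rates.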

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Node Classification | CiteSeer (0.5%) | MT-GCN | Accuracy | 67.7% | #1 |
| Node Classification | CiteSeer (1%) | MT-GCN | Accuracy | 68.9% | #3 |
| Node Classification | Cora | MT-GCN | Accuracy | 80.9% | #63 |
| Node Classification | Cora (0.5%) | MT-GCN | Accuracy | 66.9% | #7 |
| Node Classification | Cora (1%) | MT-GCN | Accuracy | 73.1% | #6 |
| Node Classification | Cora (3%) | MT-GCN | Accuracy | 78.5% | #7 |
| Node Classification | PubMed (0.03%) | MT-GCN | Accuracy | 65.5% | #4 |
| Node Classification | PubMed (0.05%) | MT-GCN | Accuracy | 69.5% | #4 |
| Node Classification | PubMed (0.1%) | MT-GCN | Accuracy | 73.1% | #7 |
