Boosting-GNN: Boosting Algorithm for Graph Networks on Imbalanced Node Classification

25 May 2021 · S. Shi, Kai Qiao, Shuai Yang, L. Wang, J. Chen, Bin Yan

Graph Neural Networks (GNNs) have been widely used for graph data representation. However, existing research largely assumes ideally balanced datasets, and imbalanced datasets are rarely considered. Traditional techniques for handling class imbalance, such as resampling, reweighting, and synthetic sample generation, do not transfer directly to GNNs. This paper proposes an ensemble model called Boosting-GNN, which uses GNNs as base classifiers during boosting. Boosting-GNN assigns higher weights to training samples that were misclassified by the previous classifier, thereby achieving higher classification accuracy and better reliability. In addition, transfer learning is used to reduce computational cost and improve fitting ability. Experimental results indicate that Boosting-GNN outperforms GCN, GraphSAGE, GAT, SGC, N-GCN, and the most advanced reweighting and resampling methods on synthetic imbalanced datasets, with an average performance improvement of 4.5%.
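To illustrate the general idea of using GNNs as base classifiers inside an AdaBoost-style reweighting loop, the sketch below trains several small GCNs in sequence, up-weighting training nodes that the previous round misclassified. It is a minimal, self-contained approximation, not the paper's implementation: the names (`SimpleGCN`, `boosting_gnn`), the SAMME-style classifier weights, the dense-adjacency GCN, and all hyperparameters are assumptions for illustration, and the transfer-learning component described in the abstract is omitted.

```python
# Hypothetical sketch: AdaBoost-style sample reweighting with GNN base classifiers.
import torch
import torch.nn as nn
import torch.nn.functional as F

def normalize_adj(adj):
    # Symmetric GCN normalization: D^-1/2 (A + I) D^-1/2 on a dense adjacency.
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_hat.sum(1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)

class SimpleGCN(nn.Module):
    # Two-layer GCN operating on a dense normalized adjacency matrix.
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def forward(self, x, a_norm):
        h = F.relu(a_norm @ self.w1(x))
        return a_norm @ self.w2(h)

def boosting_gnn(x, adj, y, train_mask, n_classes, n_estimators=3, epochs=100):
    a_norm = normalize_adj(adj)
    n_train = int(train_mask.sum())
    w = torch.full((n_train,), 1.0 / n_train)  # per-sample boosting weights
    alphas, models = [], []
    for _ in range(n_estimators):
        model = SimpleGCN(x.size(1), 16, n_classes)
        opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
        for _ in range(epochs):
            opt.zero_grad()
            logits = model(x, a_norm)[train_mask]
            # Weighted cross-entropy: nodes misclassified in earlier rounds
            # carry larger weights, so later base GNNs focus on them.
            loss = (F.cross_entropy(logits, y[train_mask], reduction="none") * w).sum()
            loss.backward()
            opt.step()
        with torch.no_grad():
            pred = model(x, a_norm)[train_mask].argmax(1)
        miss = (pred != y[train_mask]).float()
        err = float((w * miss).sum() / w.sum())
        err = min(max(err, 1e-10), 1 - 1e-10)
        # SAMME classifier weight for the multi-class case.
        alpha = torch.log(torch.tensor((1 - err) / err)) + torch.log(torch.tensor(n_classes - 1.0))
        w = w * torch.exp(alpha * miss)
        w = w / w.sum()
        alphas.append(alpha)
        models.append(model)

    def predict(x, adj):
        a_n = normalize_adj(adj)
        scores = sum(a * F.softmax(m(x, a_n), dim=1) for a, m in zip(alphas, models))
        return scores.argmax(1)

    return predict

# Toy usage on a 4-node chain graph (illustrative only).
x = torch.randn(4, 5)
adj = torch.tensor([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=torch.float)
y = torch.tensor([0, 0, 1, 1])
train_mask = torch.tensor([True, True, True, True])
predict = boosting_gnn(x, adj, y, train_mask, n_classes=2)
print(predict(x, adj))
```

Under class imbalance, the reweighting loop naturally concentrates weight on minority-class nodes that early base classifiers get wrong, which is the intuition behind combining boosting with GNN base learners.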
