Boosting-GNN: Boosting Algorithm for Graph Networks on Imbalanced Node Classification

Graph Neural Networks (GNNs) have been widely used for graph data representation. However, existing research mostly assumes ideally balanced datasets, and imbalanced datasets are rarely considered. Traditional techniques for handling imbalanced data, such as resampling, reweighting, and synthetic sample generation, are not directly applicable to GNNs. This paper proposes an ensemble model called Boosting-GNN, which uses GNNs as the base classifiers during boosting. In Boosting-GNN, higher weights are assigned to the training samples that were misclassified by the previous classifier, thus achieving higher classification accuracy and better reliability. In addition, transfer learning is used to reduce computational cost and increase fitting ability. Experimental results indicate that the proposed Boosting-GNN model achieves better performance than GCN, GraphSAGE, GAT, SGC, N-GCN, and the most advanced reweighting and resampling methods on synthetic imbalanced datasets, with an average performance improvement of 4.5%.
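As a rough illustration of the sample-reweighting scheme described in the abstract, the sketch below shows an AdaBoost-style (SAMME) loop in which each round trains a GNN base classifier under the current sample weights and then upweights the misclassified training nodes. The `train_base_gnn` callable is a hypothetical placeholder for training one GNN and returning node predictions; it is not part of the paper's released code, and the update rule shown is the standard SAMME variant rather than a verbatim reproduction of Boosting-GNN.

```python
import numpy as np

def boosting_gnn_sketch(train_base_gnn, features, adj, labels, train_idx,
                        n_classes, n_rounds=5):
    """Minimal AdaBoost-style ensemble over GNN base classifiers (assumed interface).

    train_base_gnn(features, adj, labels, train_idx, sample_weights) is assumed
    to train one GNN with the given per-sample weights and return predicted
    labels for all nodes.
    """
    n_train = len(train_idx)
    w = np.full(n_train, 1.0 / n_train)      # uniform initial sample weights
    ensemble = []                            # list of (alpha, predictions)

    for _ in range(n_rounds):
        preds = train_base_gnn(features, adj, labels, train_idx, w)
        miss = (preds[train_idx] != labels[train_idx]).astype(float)

        err = np.dot(w, miss) / w.sum()      # weighted training error
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = np.log((1 - err) / err) + np.log(n_classes - 1)  # SAMME weight

        # misclassified training nodes receive larger weights next round
        w *= np.exp(alpha * miss)
        w /= w.sum()
        ensemble.append((alpha, preds))

    # final prediction: weighted vote over the base classifiers
    votes = np.zeros((len(labels), n_classes))
    for alpha, preds in ensemble:
        votes[np.arange(len(preds)), preds] += alpha
    return votes.argmax(axis=1)
```

The exponential update concentrates weight on nodes that earlier GNNs got wrong, which is what lets later base classifiers focus on the minority classes in an imbalanced node-classification setting.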
