Vision GNN: An Image is Worth Graph of Nodes

1 Jun 2022 · Kai Han, Yunhe Wang, Jianyuan Guo, Yehui Tang, Enhua Wu

Network architecture plays a key role in deep learning-based computer vision systems. The widely used convolutional neural network and transformer treat the image as a grid or a sequence, which is not flexible enough to capture irregular and complex objects. In this paper, we propose to represent the image as a graph structure and introduce a new Vision GNN (ViG) architecture to extract graph-level features for visual tasks. We first split the image into a number of patches, which are viewed as nodes, and construct a graph by connecting the nearest neighbors. Based on this graph representation of images, we build our ViG model to transform and exchange information among all the nodes. ViG consists of two basic modules: the Grapher module, which uses graph convolution to aggregate and update graph information, and the FFN module, which uses two linear layers for node feature transformation. Both isotropic and pyramid architectures of ViG are built at different model sizes. Extensive experiments on image recognition and object detection tasks demonstrate the superiority of our ViG architecture. We hope this pioneering study of GNNs on general visual tasks will provide useful inspiration and experience for future research. The PyTorch code is available at https://github.com/huawei-noah/Efficient-AI-Backbones and the MindSpore code is available at https://gitee.com/mindspore/models.
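The abstract describes the core pipeline: image patches become graph nodes, each node is connected to its nearest neighbors in feature space, and stacked Grapher + FFN blocks transform and exchange node features. The PyTorch sketch below illustrates how one such block could be wired together. It is not the authors' implementation: the neighbor aggregation (a max-relative-style update), the layer layout, and the hyperparameters (k = 9, 192-dimensional nodes) are illustrative assumptions; the official code lives in the repositories linked above.

```python
# Minimal sketch of a ViG-style block: k-NN graph over patch nodes,
# a Grapher module for neighbor aggregation, and an FFN for feature
# transformation. Shapes and hyperparameters are illustrative only.
import torch
import torch.nn as nn


def knn_graph(x: torch.Tensor, k: int) -> torch.Tensor:
    """Return the indices of the k nearest neighbors of each node.

    x: (B, N, C) node features -> output: (B, N, k) neighbor indices.
    """
    dist = torch.cdist(x, x)                        # pairwise distances (B, N, N)
    idx = dist.topk(k + 1, largest=False).indices   # closest match is the node itself
    return idx[:, :, 1:]                            # drop the self-loop


class Grapher(nn.Module):
    """Aggregate and update node features over the k-NN graph."""

    def __init__(self, dim: int, k: int = 9):
        super().__init__()
        self.k = k
        self.fc_in = nn.Linear(dim, dim)
        self.fc_out = nn.Linear(2 * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shortcut = x
        x = self.fc_in(x)
        idx = knn_graph(x, self.k)                                # (B, N, k)
        B, N, C = x.shape
        neighbors = torch.gather(
            x.unsqueeze(1).expand(B, N, N, C), 2,
            idx.unsqueeze(-1).expand(B, N, self.k, C))            # (B, N, k, C)
        # Max-relative aggregation: largest feature difference to any neighbor.
        rel = (neighbors - x.unsqueeze(2)).max(dim=2).values      # (B, N, C)
        x = self.fc_out(torch.cat([x, rel], dim=-1))
        return shortcut + x                                       # residual connection


class FFN(nn.Module):
    """Two linear layers for per-node feature transformation."""

    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.net(x)                                    # residual connection


class ViGBlock(nn.Module):
    """One ViG-style block: Grapher followed by FFN."""

    def __init__(self, dim: int = 192, k: int = 9):
        super().__init__()
        self.grapher = Grapher(dim, k)
        self.ffn = FFN(dim, 4 * dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.ffn(self.grapher(x))


if __name__ == "__main__":
    # 196 nodes = 14x14 patches from a 224x224 image with 16x16 patch size.
    nodes = torch.randn(2, 196, 192)
    print(ViGBlock()(nodes).shape)  # torch.Size([2, 196, 192])
```

Stacking several such blocks, either at a fixed resolution (isotropic) or with progressive downsampling (pyramid), yields the ViG backbones evaluated below.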


Datasets

ImageNet

Results from the Paper


Task: Image Classification · Dataset: ImageNet

Model            Top 1 Accuracy   Number of params   GFLOPs
Pyramid ViG-B    83.7%            92.6M              16.8
Pyramid ViG-S    82.1%            27.3M              4.6
Pyramid ViG-M    83.1%            51.7M              8.9
Pyramid ViG-Ti   78.2%            10.7M              1.7
