ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers

24 May 2023  ·  Jingfeng Yao, Xinggang Wang, Shusheng Yang, Baoyuan Wang ·

Recently, plain vision Transformers (ViTs) have shown impressive performance on various computer vision tasks, thanks to their strong modeling capacity and large-scale pretraining. However, they have not yet conquered the problem of image matting. We hypothesize that image matting can also be boosted by ViTs and present ViTMatte, a new efficient and robust ViT-based matting system. Our method utilizes (i) a hybrid attention mechanism combined with a convolution neck to help ViTs achieve an excellent performance-computation trade-off in matting tasks, and (ii) a detail capture module, consisting only of simple lightweight convolutions, to complement the detailed information required by matting. To the best of our knowledge, ViTMatte is the first work to unleash the potential of ViTs on image matting with concise adaptation. It inherits many superior properties of ViTs, including various pretraining strategies, a concise architecture design, and flexible inference strategies. We evaluate ViTMatte on Composition-1k and Distinctions-646, the most commonly used benchmarks for image matting, where our method achieves state-of-the-art performance and outperforms prior matting works by a large margin.
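To make the described design concrete, below is a minimal sketch of a ViTMatte-style model in PyTorch, assuming a pretrained plain ViT backbone that returns a single-scale stride-16 feature map. All module names, channel widths, and the fusion scheme are illustrative assumptions, not the paper's exact implementation; the hybrid window/global attention and convolution neck are assumed to live inside the backbone and are not shown here.

```python
# Sketch of a ViTMatte-style matting model: a plain ViT backbone plus a
# lightweight convolutional detail-capture stream, fused to predict alpha.
# Hypothetical names and channel sizes; not the authors' reference code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DetailCapture(nn.Module):
    """Lightweight conv stream extracting fine details from image + trimap."""
    def __init__(self, in_ch=4, chans=(32, 64, 128)):
        super().__init__()
        self.stages = nn.ModuleList()
        prev = in_ch
        for c in chans:
            self.stages.append(nn.Sequential(
                nn.Conv2d(prev, c, 3, stride=2, padding=1, bias=False),
                nn.BatchNorm2d(c),
                nn.ReLU(inplace=True),
            ))
            prev = c

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)  # feature maps at strides 2, 4, 8
        return feats


class FusionDecoder(nn.Module):
    """Progressively upsample ViT features and fuse with detail features."""
    def __init__(self, vit_dim=384, detail_chans=(128, 64, 32)):
        super().__init__()
        self.blocks = nn.ModuleList()
        prev = vit_dim
        for c in detail_chans:
            self.blocks.append(nn.Sequential(
                nn.Conv2d(prev + c, c, 3, padding=1, bias=False),
                nn.BatchNorm2d(c),
                nn.ReLU(inplace=True),
            ))
            prev = c
        self.head = nn.Conv2d(prev, 1, 3, padding=1)

    def forward(self, vit_feat, detail_feats):
        x = vit_feat  # stride-16 ViT feature map
        for block, d in zip(self.blocks, reversed(detail_feats)):  # strides 8, 4, 2
            x = F.interpolate(x, size=d.shape[-2:], mode="bilinear", align_corners=False)
            x = block(torch.cat([x, d], dim=1))
        x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
        return torch.sigmoid(self.head(x))  # alpha matte in [0, 1]


class ViTMatteSketch(nn.Module):
    def __init__(self, vit_backbone, vit_dim=384):
        super().__init__()
        self.backbone = vit_backbone          # any pretrained plain ViT
        self.details = DetailCapture()
        self.decoder = FusionDecoder(vit_dim=vit_dim)

    def forward(self, image, trimap):
        x = torch.cat([image, trimap], dim=1)  # 3 RGB channels + 1 trimap channel
        vit_feat = self.backbone(x)            # assumed shape (B, vit_dim, H/16, W/16)
        detail_feats = self.details(x)
        return self.decoder(vit_feat, detail_feats)
```

Because the detail stream is just a stack of strided 3x3 convolutions, nearly all capacity (and all pretrained weights) stay in the plain ViT backbone, which is the property the abstract emphasizes.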

Results from the Paper


Image Matting on Composition-1K (ViTMatte)

  Metric   Value    Global Rank
  MSE      3.0      # 3
  SAD      20.33    # 3
  Grad     6.74     # 4
  Conn     14.78    # 3

Image Matting on Distinctions-646 (ViTMatte, trimap-based)

  Metric   Value    Global Rank
  SAD      17.05    # 2
  MSE      0.0015   # 1
  Grad     7.03     # 1
  Conn     12.95    # 1
  Trimap   —        # 1
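For reference, SAD and MSE are computed over the unknown region of the trimap; Grad (gradient error) and Conn (connectivity error) are more involved and omitted here. The sketch below is a minimal NumPy version under common assumptions: alpha maps in [0, 1], trimaps encoded as 0 (background), 128 (unknown), 255 (foreground), and SAD divided by 1000. Scaling conventions differ between papers and leaderboards, which is why MSE appears as 3.0 on one benchmark and 0.0015 on the other.

```python
# Minimal sketch of the SAD and MSE matting metrics over the trimap's unknown
# region. Assumes alpha in [0, 1] and the common 0/128/255 trimap encoding;
# scaling conventions (e.g. SAD / 1000, MSE reported x 1e-3) vary by benchmark.
import numpy as np


def matting_sad(pred, gt, trimap, unknown_value=128):
    """Sum of absolute differences, restricted to unknown trimap pixels."""
    mask = (trimap == unknown_value)
    return float(np.abs(pred - gt)[mask].sum()) / 1000.0  # common /1000 scaling


def matting_mse(pred, gt, trimap, unknown_value=128):
    """Mean squared error over unknown trimap pixels."""
    mask = (trimap == unknown_value)
    return float(((pred - gt) ** 2)[mask].mean())
```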
