Semantic Image Matting

CVPR 2021 · Yanan Sun, Chi-Keung Tang, Yu-Wing Tai

Natural image matting separates foreground from background at fractional per-pixel opacity, which arises with highly transparent objects, complex foregrounds (e.g., nets or trees), and objects with very fine details (e.g., hair). Although the conventional matting formulation applies to all of these cases, no previous work has attempted to reason about the underlying causes of matting arising from different foreground semantics. We show how to obtain better alpha mattes by incorporating semantic classification of matting regions into our framework. Specifically, we consider and learn 20 classes of matting patterns, and propose to extend the conventional trimap to a semantic trimap, which can be obtained automatically through patch structure analysis within trimap regions. In addition, we learn a multi-class discriminator to regularize the alpha prediction at the semantic level, together with content-sensitive weights to balance the different regularization losses. Experiments on multiple benchmarks show that our method outperforms prior methods and achieves state-of-the-art performance. Finally, we contribute a large-scale Semantic Image Matting Dataset constructed with careful data balancing across the semantic classes.
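The "conventional matting formulation" the abstract refers to is the standard compositing equation I = αF + (1 − α)B, where α ∈ [0, 1] is the per-pixel foreground opacity (the alpha matte) that matting methods estimate. A minimal sketch of this equation and of a trimap (known foreground / known background / unknown band), using illustrative random data rather than anything from the paper:

```python
import numpy as np

# Compositing equation: I = alpha * F + (1 - alpha) * B,
# where alpha in [0, 1] is the fractional foreground opacity per pixel.
rng = np.random.default_rng(0)
F = rng.random((4, 4, 3))          # foreground colors (toy data)
B = rng.random((4, 4, 3))          # background colors (toy data)
alpha = rng.random((4, 4, 1))      # per-pixel opacity (the alpha matte)

I = alpha * F + (1.0 - alpha) * B  # composited image

# A trimap partitions the image into known foreground (1), known
# background (0), and an unknown band (0.5 here); matting estimates
# alpha only inside the unknown band.
trimap = np.where(alpha > 0.95, 1.0, np.where(alpha < 0.05, 0.0, 0.5))
```

The paper's semantic trimap extends this by additionally labeling the unknown band with one of 20 learned matting-pattern classes.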

Results

Task: Image Matting  |  Dataset: Composition-1K  |  Model: SIM
  Metric       Value    Global Rank
  MSE          5.8      #10
  SAD          28.0     #9
  Grad         10.8     #9
  Conn         24.8     #9

Task: Semantic Image Matting  |  Dataset: Semantic Image Matting Dataset  |  Model: SIM
  Metric       Value    Global Rank
  SAD          27.87    #1
  MSE (x10^3)  4.7      #1
  Grad         11.57    #1
  Conn         20.83    #1
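The two simplest metrics in the tables above, SAD (sum of absolute differences) and MSE, compare the predicted alpha matte to the ground truth, conventionally restricted to the unknown trimap region. A hedged sketch (scaling conventions such as SAD/1000 or MSE x 10^3 vary across benchmarks; this is the unscaled form on toy data):

```python
import numpy as np

def sad(pred, gt, mask):
    """Sum of absolute alpha differences over the unknown region."""
    return float(np.abs(pred - gt)[mask].sum())

def mse(pred, gt, mask):
    """Mean squared alpha error over the unknown region."""
    d = (pred - gt)[mask]
    return float((d ** 2).mean())

# Toy example: a near-perfect prediction on a random ground-truth matte.
rng = np.random.default_rng(1)
gt = rng.random((8, 8))
pred = np.clip(gt + 0.01 * rng.standard_normal((8, 8)), 0.0, 1.0)
mask = (gt > 0.05) & (gt < 0.95)   # toy "unknown" trimap band

print(sad(pred, gt, mask), mse(pred, gt, mask))
```

Grad and Conn (gradient and connectivity errors) are more involved perceptual metrics defined in the Composition-1K benchmark and are omitted from this sketch.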
