Adversarially-Guided Portrait Matting

4 May 2023 · Sergej Chicherin, Karen Efremyan

We present a method for generating alpha mattes using a limited data source. We pretrain a novel transformer-based model (StyleMatte) on portrait datasets. We then use this model to provide image-mask pairs for a StyleGAN3-based network (StyleMatteGAN). This network is trained in an unsupervised manner and generates previously unseen image-mask training pairs that are fed back to StyleMatte. We demonstrate that the performance of the matte-pulling network improves during this cycle, achieving top results on human portraits and state-of-the-art metrics on the animals dataset. Furthermore, StyleMatteGAN produces high-resolution, privacy-preserving portraits with alpha mattes, making it suitable for various image composition tasks. Our code is available at https://github.com/chroneus/stylematte
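As an illustration of the image composition use case mentioned above, a predicted alpha matte can be applied with the standard compositing equation I = αF + (1 − α)B to place a portrait over a new background. The sketch below is a minimal example; the `composite` helper is hypothetical and is not part of the StyleMatte codebase:

```python
import numpy as np

def composite(foreground, background, alpha):
    """Alpha-blend a foreground portrait over a new background.

    foreground, background: uint8 RGB arrays of shape (H, W, 3).
    alpha: float matte in [0, 1] of shape (H, W), e.g. produced by a matting network.
    """
    a = alpha[..., None].astype(np.float32)          # broadcast matte over RGB channels
    out = a * foreground.astype(np.float32) + (1.0 - a) * background.astype(np.float32)
    return out.clip(0, 255).astype(np.uint8)
```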


Results from the Paper


Task           Dataset  Model       Metric  Value   Global Rank
Image Matting  AM-2K    StyleMatte  SAD     9.602   #1
Image Matting  AM-2K    StyleMatte  MSE     0.0024  #1
Image Matting  AM-2K    StyleMatte  MAD     0.0055  #1
Image Matting  P3M-10k  StyleMatte  SAD     6.97    #2
Image Matting  P3M-10k  StyleMatte  MSE     0.0019  #2
Image Matting  P3M-10k  StyleMatte  MAD     0.004   #2
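For reference, a minimal sketch of how the SAD, MSE, and MAD matting metrics in the table are typically computed against a ground-truth alpha matte. The function name and the convention of reporting SAD in thousands are assumptions based on common practice in the matting literature, not details taken from this paper:

```python
import numpy as np

def matting_metrics(pred, gt):
    """Compute common alpha-matting error metrics.

    pred, gt: float alpha mattes in [0, 1] of shape (H, W).
    SAD is conventionally reported divided by 1000; MSE and MAD are per-pixel means.
    """
    diff = pred.astype(np.float64) - gt.astype(np.float64)
    sad = np.abs(diff).sum() / 1000.0   # sum of absolute differences (in thousands)
    mse = (diff ** 2).mean()            # mean squared error
    mad = np.abs(diff).mean()           # mean absolute difference
    return {"SAD": sad, "MSE": mse, "MAD": mad}
```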

Methods


No methods listed for this paper.