Alpha Matte Generation from Single Input for Portrait Matting

6 Jun 2021 · Dogucan Yaman, Hazim Kemal Ekenel, Alexander Waibel

In portrait matting, the goal is to predict an alpha matte that specifies each pixel's contribution to the foreground subject. Traditional approaches and most existing works rely on an additional input, e.g., a trimap or a background image, to predict the alpha matte. However, (1) providing an additional input is not always practical, and (2) models are overly sensitive to these additional inputs. To address these points, in this paper we introduce an additional-input-free approach to portrait matting. We divide the task into two subtasks: segmentation and alpha matte prediction. We first generate a coarse segmentation map from the input image and then predict the alpha matte using both the image and the segmentation map. In addition, we present a segmentation encoding block that downsamples the coarse segmentation map and provides a useful feature representation to the residual block, since using a single encoder causes the segmentation information to vanish. We tested our model on four different benchmark datasets. The proposed method outperformed the MODNet and MGMatting methods, which also take a single input. Moreover, we obtained results comparable to the BGM-V2 and FBA methods, which require additional inputs.
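
The abstract outlines a two-stage pipeline: a coarse segmentation map is predicted first, and then the image and segmentation map are fed to an alpha prediction network whose residual block receives extra features from a segmentation encoding block. The sketch below illustrates that data flow in PyTorch. It is not the authors' implementation; every module name (SegEncodingBlock, MattingSketch), layer size, and injection point is an illustrative assumption.

```python
# A minimal sketch, assuming a PyTorch implementation; not the authors' code.
import torch
import torch.nn as nn

class SegEncodingBlock(nn.Module):
    """Hypothetical stand-in for the paper's segmentation encoding block:
    downsamples the coarse segmentation map into a feature map that is added
    to a residual block's input, so the segmentation signal is not lost in a
    single shared encoder."""
    def __init__(self, out_channels, stride):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, out_channels, kernel_size=3, stride=stride, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, seg_map):
        return self.encode(seg_map)

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x, seg_feat=None):
        # Inject the downsampled segmentation features before the residual body.
        if seg_feat is not None:
            x = x + seg_feat
        return torch.relu(x + self.body(x))

class MattingSketch(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Stage 1: coarse segmentation from the image alone (placeholder head).
        self.seg_net = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid(),
        )
        # Stage 2: alpha prediction from image + coarse segmentation map.
        self.encoder = nn.Conv2d(4, channels, 3, stride=2, padding=1)
        self.seg_enc = SegEncodingBlock(channels, stride=2)
        self.res_block = ResidualBlock(channels)
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image):
        seg = self.seg_net(image)  # coarse segmentation at 1/2 resolution
        seg_full = nn.functional.interpolate(
            seg, size=image.shape[-2:], mode="bilinear", align_corners=False)
        x = self.encoder(torch.cat([image, seg_full], dim=1))
        x = self.res_block(x, self.seg_enc(seg_full))  # re-inject seg features
        return self.decoder(x)  # predicted alpha matte

if __name__ == "__main__":
    alpha = MattingSketch()(torch.randn(1, 3, 128, 128))
    print(alpha.shape)  # torch.Size([1, 1, 128, 128])
```

The separate SegEncodingBlock mirrors the abstract's motivation: if the segmentation map only entered through the shared encoder's first layer, its signal could vanish by the time the residual block runs, so it is re-injected as its own feature stream.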
