Privacy-Preserving Portrait Matting

29 Apr 2021 · Jizhizi Li, Sihan Ma, Jing Zhang, Dacheng Tao

Recently, there has been increasing concern about the privacy issues raised by using personally identifiable information in machine learning. However, previous portrait matting methods have all relied on identifiable portrait images. To fill this gap, we present P3M-10k, the first large-scale anonymized benchmark for Privacy-Preserving Portrait Matting. P3M-10k consists of 10,000 high-resolution face-blurred portrait images along with high-quality alpha mattes. We systematically evaluate both trimap-free and trimap-based matting methods on P3M-10k and find that existing methods show different generalization capabilities under the Privacy-Preserving Training (PPT) setting, i.e., training on face-blurred images and testing on arbitrary images. To devise a better trimap-free portrait matting model, we propose P3M-Net, which leverages a unified framework for both semantic perception and detail matting and specifically emphasizes the interaction between the two tasks and the encoder to facilitate the matting process. Extensive experiments on P3M-10k demonstrate that P3M-Net outperforms state-of-the-art methods in terms of both objective metrics and subjective visual quality. It also generalizes well under the PPT setting, confirming the value of P3M-10k for facilitating future research and enabling potential real-world applications. The source code and dataset are available at https://github.com/JizhiziLi/P3M
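The PPT setting amounts to obfuscating faces before training while evaluating on arbitrary, unblurred images. As a rough illustration of the face-obfuscation step, the sketch below blurs detected face regions with OpenCV; the Haar-cascade detector and the blur parameters are assumptions for demonstration only (P3M-10k itself already ships with faces obfuscated).

```python
# Illustrative face-blurring preprocessing for the PPT setting.
# Detector choice and kernel size are assumptions, not the dataset's pipeline.
import cv2

def blur_faces(image_bgr, kernel=(51, 51), sigma=0):
    """Return a copy of the image with detected face regions Gaussian-blurred."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    out = image_bgr.copy()
    for (x, y, w, h) in faces:
        out[y:y + h, x:x + w] = cv2.GaussianBlur(
            out[y:y + h, x:x + w], kernel, sigma
        )
    return out
```

The abstract also describes P3M-Net as a unified framework in which a semantic-perception task and a detail-matting task interact with a shared encoder. The minimal PyTorch sketch below shows the general two-branch idea common in trimap-free matting; the module layout and fusion rule are illustrative assumptions, not the exact P3M-Net architecture.

```python
# A minimal two-branch matting sketch, assuming a shared encoder, a coarse
# semantic branch, a fine detail branch, and a standard fusion rule.
import torch
import torch.nn as nn

class TwoBranchMatting(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        # Semantic branch: 3-class map (background / unknown / foreground).
        self.semantic = nn.Conv2d(ch, 3, 3, padding=1)
        # Detail branch: fine alpha values for transition regions.
        self.detail = nn.Sequential(nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        feat = self.encoder(x)
        seg = torch.softmax(self.semantic(feat), dim=1)  # B x 3 x H x W
        fine = self.detail(feat)                         # B x 1 x H x W
        bg, unk, fg = seg[:, 0:1], seg[:, 1:2], seg[:, 2:3]
        # Trust the semantic branch in certain regions and the detail
        # branch inside the unknown (transition) region.
        alpha = fg + unk * fine
        return alpha
```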


Datasets


Introduced in the Paper:

P3M-10k

Used in the Paper:

BG-20k

Results from the Paper


Task            Dataset   Model         Metric   Value    Global Rank
Image Matting   P3M-10k   P3M-Net (r)   SAD      8.73     #3
Image Matting   P3M-10k   P3M-Net (r)   MSE      0.0026   #3
Image Matting   P3M-10k   P3M-Net (r)   MAD      0.0051   #3
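For reference, the three reported metrics compare the predicted alpha matte against the ground truth. The sketch below follows common conventions in the matting literature (alpha in [0, 1]; SAD typically reported divided by 1000); the exact normalization used on P3M-10k is an assumption here.

```python
# Sketch of the SAD / MSE / MAD matting metrics under common conventions.
import numpy as np

def matting_metrics(pred, gt):
    """pred, gt: float arrays in [0, 1] with the same shape."""
    diff = pred.astype(np.float64) - gt.astype(np.float64)
    sad = np.abs(diff).sum() / 1000.0  # Sum of Absolute Differences (x1e-3)
    mse = np.mean(diff ** 2)           # Mean Squared Error
    mad = np.mean(np.abs(diff))        # Mean Absolute Difference
    return sad, mse, mad
```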

Methods


No methods listed for this paper.