Face Parsing
15 papers with code • 4 benchmarks • 4 datasets
Classify each pixel of a face image into facial-component classes, based on a given face bounding box.
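A minimal sketch of that setup, under my own assumptions (toy network, made-up class count, dummy bounding box), not any listed paper's method: crop the image to the given face box, run a fully-convolutional network, and take a per-pixel argmax over class logits.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 11  # assumption: background, skin, brows, eyes, nose, lips, hair, ...

class TinyFaceParser(nn.Module):
    """Toy fully-convolutional parser; real methods use far deeper backbones."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, num_classes, 1),  # per-pixel class logits
        )

    def forward(self, x):
        return self.net(x)

image = torch.rand(1, 3, 512, 512)      # dummy RGB image
x0, y0, x1, y1 = 96, 80, 416, 480       # given face bounding box
face_crop = image[:, :, y0:y1, x0:x1]

parser = TinyFaceParser()
logits = parser(face_crop)              # (1, NUM_CLASSES, H, W)
labels = logits.argmax(dim=1)           # per-pixel facial class ids
print(labels.shape)                     # torch.Size([1, 400, 320])
```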
Most implemented papers
RoI Tanh-polar Transformer Network for Face Parsing in the Wild
Face parsing aims to predict pixel-wise labels for facial components of a target face in an image.
E2Style: Improve the Efficiency and Effectiveness of StyleGAN Inversion
This paper studies the problem of StyleGAN inversion, which plays an essential role in enabling the pretrained StyleGAN to be used for real image editing tasks.
Weakly-supervised Caricature Face Parsing through Domain Adaptation
However, current state-of-the-art face parsing methods require large amounts of pixel-level labeled data, and collecting such annotations for caricatures is tedious and labor-intensive.
Face Parsing with RoI Tanh-Warping
It uses a hierarchical, local-based method for inner facial components and global methods for outer facial components.
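A rough sketch of the tanh-warping idea behind this line of work, as I read it (not the authors' implementation): the whole image is resampled so the face RoI fills most of a fixed-size canvas, while the surrounding context is retained but compressed towards the borders by the tanh's saturation.

```python
import torch
import torch.nn.functional as F

def roi_tanh_warp(image, roi, out_size=512):
    """image: (1,3,H,W); roi: (cx, cy, half_w, half_h) in source pixels."""
    _, _, H, W = image.shape
    cx, cy, hw, hh = roi

    # Target canvas coordinates in (-1, 1); stay away from +-1 so atanh is finite.
    t = torch.linspace(-1.0, 1.0, out_size) * 0.999
    ty, tx = torch.meshgrid(t, t, indexing="ij")

    # Inverse of tanh maps canvas positions back to pixel offsets around the RoI.
    sx = cx + torch.atanh(tx) * hw
    sy = cy + torch.atanh(ty) * hh

    # Normalize source coordinates to (-1, 1) for grid_sample.
    grid = torch.stack([sx / (W - 1) * 2 - 1, sy / (H - 1) * 2 - 1], dim=-1)
    return F.grid_sample(image, grid.unsqueeze(0), align_corners=True)

warped = roi_tanh_warp(torch.rand(1, 3, 480, 640), roi=(320.0, 240.0, 120.0, 150.0))
print(warped.shape)  # torch.Size([1, 3, 512, 512])
```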
End-to-End Face Parsing via Interlinked Convolutional Neural Networks
Face parsing is an important computer vision task that requires accurate pixel segmentation of facial parts (such as eyes, nose, and mouth).
Edge-aware Graph Representation Learning and Reasoning for Face Parsing
Specifically, we encode a facial image onto a global graph representation where a collection of pixels ("regions") with similar features are projected to each vertex.
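A rough sketch of that pixels-to-vertices projection, under my own assumptions (the paper's exact projection and edge handling differ): each pixel is softly assigned to a small set of region vertices, and a vertex's feature is the assignment-weighted average of its pixels' features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelToVertexProjection(nn.Module):
    def __init__(self, in_channels=64, num_vertices=16):
        super().__init__()
        self.assign = nn.Conv2d(in_channels, num_vertices, kernel_size=1)

    def forward(self, feat):                     # feat: (B, C, H, W)
        B, C, H, W = feat.shape
        # Soft assignment of every pixel to each vertex ("region").
        A = F.softmax(self.assign(feat), dim=1)  # (B, V, H, W)
        A = A.flatten(2)                         # (B, V, H*W)
        X = feat.flatten(2).transpose(1, 2)      # (B, H*W, C)
        # Vertex features: assignment-weighted average of pixel features.
        vertices = torch.bmm(A, X) / (A.sum(dim=2, keepdim=True) + 1e-6)
        return vertices, A                       # (B, V, C); A kept for re-projection

proj = PixelToVertexProjection()
v, A = proj(torch.rand(2, 64, 32, 32))
print(v.shape)  # torch.Size([2, 16, 64])
```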
Progressive Semantic-Aware Style Transformation for Blind Face Restoration
Compared with previous networks, the proposed PSFR-GAN makes full use of semantic-space (parsing maps) and pixel-space (LQ images) information from input pairs at different scales.
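A loose sketch of that multi-scale "semantic + pixel" input, as an assumed structure rather than the released PSFR-GAN code: at each scale the downsampled LQ image and its parsing map are concatenated channel-wise and fed to the corresponding stage of the restoration network.

```python
import torch
import torch.nn.functional as F

def build_input_pyramid(lq_image, parsing_map, scales=(1.0, 0.5, 0.25)):
    """lq_image: (B,3,H,W); parsing_map: (B,C,H,W) one-hot or soft parsing scores."""
    pyramid = []
    for s in scales:
        img = F.interpolate(lq_image, scale_factor=s, mode="bilinear", align_corners=False)
        seg = F.interpolate(parsing_map, scale_factor=s, mode="nearest")
        pyramid.append(torch.cat([img, seg], dim=1))  # pixel + semantic channels
    return pyramid

pairs = build_input_pyramid(torch.rand(1, 3, 256, 256), torch.rand(1, 19, 256, 256))
print([p.shape for p in pairs])
```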
Learning Spatial Attention for Face Super-Resolution
Visualization of the attention maps shows that our spatial attention network can capture the key face structures well even for very low-resolution faces (e.g., 16×16).
High Resolution Face Editing with Masked GAN Latent Code Optimization
The proposed approach is based on an optimization procedure that directly optimizes the latent code of a pre-trained (state-of-the-art) Generative Adversarial Network (i.e., StyleGAN2) with respect to several constraints that ensure: (i) preservation of relevant image content, (ii) generation of the targeted facial attributes, and (iii) spatially selective treatment of local image areas.
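A schematic of that masked latent-code optimization, with placeholder components (the `generator`, `attribute_loss`, and mask here are stand-ins; the paper uses a pretrained StyleGAN2 and its own constraint formulation):

```python
import torch

def edit_latent(generator, attribute_loss, target_image, mask, w_init,
                steps=200, lr=0.05, lambda_attr=1.0):
    """mask: 1 where edits are allowed, 0 where content must be preserved."""
    w = w_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        img = generator(w)
        # (i) preserve content outside the edited region,
        preserve = ((1 - mask) * (img - target_image)).pow(2).mean()
        # (ii) push the masked region towards the target attribute.
        attr = attribute_loss(mask * img)
        loss = preserve + lambda_attr * attr
        loss.backward()
        opt.step()
    return w.detach()
```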
Fully Transformer Networks for Semantic Image Segmentation
Transformers have shown impressive performance in various natural language processing and computer vision tasks, due to the capability of modeling long-range dependencies.