Search Results for author: Bingchuan Li

Found 6 papers, 3 papers with code

GaFET: Learning Geometry-aware Facial Expression Translation from In-The-Wild Images

no code implementations • ICCV 2023 • Tianxiang Ma, Bingchuan Li, Qian He, Jing Dong, Tieniu Tan

In this paper, we introduce a novel Geometry-aware Facial Expression Translation (GaFET) framework, which is based on parametric 3D facial representations and can stably decouple expression.

Facial Expression Translation

CFFT-GAN: Cross-domain Feature Fusion Transformer for Exemplar-based Image Translation

no code implementations • 3 Feb 2023 • Tianxiang Ma, Bingchuan Li, Wei Liu, Miao Hua, Jing Dong, Tieniu Tan

In this paper, we propose a more general learning approach by considering two domain features as a whole and learning both inter-domain correspondence and intra-domain potential information interactions.

Translation
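The snippet above describes treating the features of both domains as a single set, so that the model learns inter-domain correspondence and intra-domain interactions at once. A minimal sketch of that idea is single-head self-attention over the concatenated token sets — the function name, shapes, and single-head formulation here are illustrative assumptions, not the paper's actual transformer:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fused_self_attention(feat_a, feat_b):
    """Concatenate tokens from both domains and run one self-attention pass,
    so every token can attend across domains (inter) and within its own (intra)."""
    tokens = np.concatenate([feat_a, feat_b], axis=0)   # (Na + Nb, d)
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)             # all-pairs affinities
    attn = softmax(scores, axis=-1)                     # rows sum to 1
    out = attn @ tokens                                 # mix features across both domains
    return out[: len(feat_a)], out[len(feat_a):]        # split back per domain

# Toy tokens: 3 from domain A, 2 from domain B, feature dim 4.
out_a, out_b = fused_self_attention(np.ones((3, 4)), np.zeros((2, 4)))
```

Because the attention matrix spans the concatenated set, no separate cross-attention module is needed: cross-domain and within-domain interactions fall out of the same pass.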

Semantic 3D-aware Portrait Synthesis and Manipulation Based on Compositional Neural Radiance Field

1 code implementation • 3 Feb 2023 • Tianxiang Ma, Bingchuan Li, Qian He, Jing Dong, Tieniu Tan

CNeRF divides the image by semantic regions and learns an independent neural radiance field for each region, and finally fuses them and renders the complete image.
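The per-region render-and-fuse pipeline described above can be caricatured as mask-weighted blending of per-region renders. Everything below — the function name and the toy solid-color "renders" — is an illustrative assumption, not the paper's volume renderer:

```python
import numpy as np

def fuse_semantic_renders(region_renders, region_masks):
    """Blend per-region renders weighted by soft semantic masks.

    region_renders: list of (H, W, 3) arrays, one per semantic region
    region_masks:   list of (H, W) soft masks that sum to 1 at each pixel
    """
    fused = np.zeros_like(region_renders[0])
    for render, mask in zip(region_renders, region_masks):
        fused += render * mask[..., None]   # broadcast mask over RGB channels
    return fused

# Two toy "regions": one renders red, the other blue; masks split the image in half.
h, w = 4, 4
red = np.tile([1.0, 0.0, 0.0], (h, w, 1))
blue = np.tile([0.0, 0.0, 1.0], (h, w, 1))
left = np.zeros((h, w)); left[:, : w // 2] = 1.0
right = 1.0 - left
img = fuse_semantic_renders([red, blue], [left, right])
```

Training each region's field independently is what makes the semantics controllable: editing one region's representation leaves the others' renders untouched before fusion.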

ReGANIE: Rectifying GAN Inversion Errors for Accurate Real Image Editing

no code implementations • 31 Jan 2023 • Bingchuan Li, Tianxiang Ma, Peng Zhang, Miao Hua, Wei Liu, Qian He, Zili Yi

Specifically, in Phase I, a W-space-oriented StyleGAN inversion network is trained and used to perform image inversion and editing, which assures the editability but sacrifices reconstruction quality.

Image Generation
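The Phase I behavior described above — invert an image into latent space, edit there, and accept some reconstruction loss — can be sketched with stand-ins: a fixed linear map plays the generator and a least-squares solve plays the trained W-space encoder. All names and the linear setup are assumptions for illustration; the actual system is built on StyleGAN:

```python
import numpy as np

rng = np.random.default_rng(1)
latent_dim, img_dim = 4, 16

# Stand-in "generator": a fixed linear map from latent space to image space.
G = rng.normal(size=(img_dim, latent_dim))

def invert(x):
    """Phase-I-style inversion: project the image into latent space.
    A least-squares solve stands in for a trained encoder network."""
    w, *_ = np.linalg.lstsq(G, x, rcond=None)
    return w

def edit(w, direction, strength):
    """Editing in latent space: move along a semantic direction."""
    return w + strength * direction

x = G @ rng.normal(size=latent_dim)   # an image the "generator" can represent exactly
w = invert(x)
x_rec = G @ w                         # exact here; real encoders lose detail
w_edit = edit(w, rng.normal(size=latent_dim), 0.5)
```

In this toy linear case the inversion is lossless; the paper's point is that a real W-space encoder is not, which is what its later rectification phase addresses.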

FaceEraser: Removing Facial Parts for Augmented Reality

1 code implementation • 22 Sep 2021 • Miao Hua, Lijie Liu, Ziyang Cheng, Qian He, Bingchuan Li, Zili Yi

However, this technique does not satisfy the requirements of facial parts removal, as it is hard to obtain "ground-truth" images with real "blank" faces.

Image Inpainting

DyStyle: Dynamic Neural Network for Multi-Attribute-Conditioned Style Editing

1 code implementation • 22 Sep 2021 • Bingchuan Li, Shaofei Cai, Wei Liu, Peng Zhang, Qian He, Miao Hua, Zili Yi

To address these limitations, we design a Dynamic Style Manipulation Network (DyStyle) whose structure and parameters vary by input samples, to perform nonlinear and adaptive manipulation of latent codes for flexible and precise attribute control.

Attribute Contrastive Learning
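The input-dependent structure described above can be sketched as conditional routing: only the per-attribute branches for the requested edits are executed, so the effective network varies from sample to sample. The expert matrices and the `dynamic_edit` interface below are hypothetical stand-ins, not DyStyle's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, n_attrs = 8, 3

# Hypothetical per-attribute "experts": each maps a latent code to an offset.
experts = [rng.normal(size=(latent_dim, latent_dim)) * 0.1 for _ in range(n_attrs)]

def dynamic_edit(w, attr_targets):
    """Apply only the experts for attributes the caller wants to change.
    attr_targets: per-attribute edit strength, or None to leave it alone."""
    delta = np.zeros_like(w)
    for i, target in enumerate(attr_targets):
        if target is not None:                 # inactive branches never run
            delta += target * (experts[i] @ w)  # input-dependent, nonlinear overall
    return w + delta

w = rng.normal(size=latent_dim)
w_edit = dynamic_edit(w, [1.0, None, -0.5])    # edit attributes 0 and 2 only
```

Skipping inactive branches is what keeps untouched attributes untouched: a request that edits nothing returns the latent code unchanged.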
