Search Results for author: Mingming He

Found 13 papers, 8 papers with code

DenseGAP: Graph-Structured Dense Correspondence Learning with Anchor Points

no code implementations · 13 Dec 2021 · Zhengfei Kuang, Jiaman Li, Mingming He, Tong Wang, Yajie Zhao

To make the local features aware of the global context and improve their matching accuracy, we introduce DenseGAP, a new solution for efficient Dense correspondence learning with a Graph-structured neural network conditioned on Anchor Points.

CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields

1 code implementation · 9 Dec 2021 · Can Wang, Menglei Chai, Mingming He, Dongdong Chen, Jing Liao

Furthermore, we propose an inverse optimization method that accurately projects an input image to the latent codes for manipulation to enable editing on real images.

Novel View Synthesis
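The inverse optimization the abstract mentions can be illustrated with a generic latent-inversion sketch (an assumption-laden simplification, not CLIP-NeRF's actual procedure): given a differentiable generator, gradient descent on the latent code minimizes the reconstruction error against the input image. Here a random linear map stands in for the real NeRF-based generator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy differentiable "generator": a fixed linear map from latent to image.
# A hypothetical stand-in for the real conditional NeRF generator.
G = rng.normal(size=(64, 8))  # maps an 8-d latent code to a 64-pixel image

def render(z):
    return G @ z

# Target image produced by an unknown latent code we want to recover.
z_true = rng.normal(size=8)
target = render(z_true)

# Inverse optimization: gradient descent on the latent code z to
# minimize the squared reconstruction error || G z - target ||^2.
z = np.zeros(8)
lr = 0.005
for _ in range(1000):
    residual = render(z) - target
    grad = 2.0 * G.T @ residual  # analytic gradient of the loss
    z -= lr * grad

print(np.allclose(z, z_true, atol=1e-3))
```

Once the latent code is recovered, edits can be applied in latent space and re-rendered, which is what enables manipulation of real images.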

DisUnknown: Distilling Unknown Factors for Disentanglement Learning

1 code implementation · ICCV 2021 · Sitao Xiang, Yuming Gu, Pengda Xiang, Menglei Chai, Hao Li, Yajie Zhao, Mingming He

In this paper, we adopt a general setting where all factors that are hard to label or identify are encapsulated as a single unknown factor.


Exemplar-Based 3D Portrait Stylization

no code implementations · 29 Apr 2021 · Fangzhou Han, Shuquan Ye, Mingming He, Menglei Chai, Jing Liao

In the second texture style transfer stage, we focus on performing style transfer on the canonical texture by adopting a differentiable renderer to optimize the texture in a multi-view framework.

Style Transfer

Cross-Domain and Disentangled Face Manipulation with 3D Guidance

1 code implementation · 22 Apr 2021 · Can Wang, Menglei Chai, Mingming He, Dongdong Chen, Jing Liao

Face image manipulation via three-dimensional guidance has been widely applied in various interactive scenarios due to its semantically-meaningful understanding and user-friendly controllability.

Domain Adaptation · Image Manipulation

Efficient Semantic Image Synthesis via Class-Adaptive Normalization

1 code implementation · 8 Dec 2020 · Zhentao Tan, Dongdong Chen, Qi Chu, Menglei Chai, Jing Liao, Mingming He, Lu Yuan, Gang Hua, Nenghai Yu

Spatially-adaptive normalization (SPADE) has recently proven remarkably successful in conditional semantic image synthesis (Park et al., 2019): it modulates the normalized activation with spatially-varying transformations learned from semantic layouts, to prevent the semantic information from being washed away.

Image Generation
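The modulation described in the abstract above can be sketched in a few lines of NumPy (an illustrative simplification, not the authors' implementation): activations are normalized per channel, then scaled and shifted by spatially-varying gamma/beta maps derived from the semantic layout. The `gamma_w`/`beta_w` projections are hypothetical stand-ins for the small conv nets SPADE learns.

```python
import numpy as np

def spade_modulate(x, seg, gamma_w, beta_w, eps=1e-5):
    """Spatially-adaptive normalization, minimal sketch.

    x:       activations, shape (C, H, W)
    seg:     one-hot semantic layout, shape (S, H, W)
    gamma_w, beta_w: (C, S) per-class projections standing in for the
    learned conv nets that predict modulation maps from the layout.
    """
    # 1) Parameter-free normalization per channel.
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    x_norm = (x - mean) / (std + eps)

    # 2) Spatially-varying scale and shift predicted from the layout:
    #    each pixel gets its own gamma/beta, so the semantic information
    #    survives normalization instead of being washed away.
    gamma = np.einsum('cs,shw->chw', gamma_w, seg)
    beta = np.einsum('cs,shw->chw', beta_w, seg)
    return x_norm * (1 + gamma) + beta

rng = np.random.default_rng(0)
C, S, H, W = 4, 3, 8, 8
x = rng.normal(size=(C, H, W))
labels = rng.integers(0, S, size=(H, W))
seg = np.eye(S)[labels].transpose(2, 0, 1)  # one-hot layout, (S, H, W)
out = spade_modulate(x, seg, rng.normal(size=(C, S)), rng.normal(size=(C, S)))
print(out.shape)  # (4, 8, 8)
```

The class-adaptive variant proposed in this paper replaces the per-pixel prediction with a cheaper class-level scheme, but the modulation structure is the same.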

Dynamic Facial Asset and Rig Generation from a Single Scan

no code implementations · 1 Oct 2020 · Jiaman Li, Zheng-Fei Kuang, Yajie Zhao, Mingming He, Karl Bladin, Hao Li

We also model the joint distribution between identities and expressions, enabling the inference of the full set of personalized blendshapes with dynamic appearances from a single neutral input scan.

One-Shot Identity-Preserving Portrait Reenactment

no code implementations · 26 Apr 2020 · Sitao Xiang, Yuming Gu, Pengda Xiang, Mingming He, Koki Nagano, Haiwei Chen, Hao Li

This is achieved by a novel landmark disentanglement network (LD-Net), which predicts personalized facial landmarks that combine the identity of the target with expressions and poses from a different subject.


Rethinking Spatially-Adaptive Normalization

no code implementations · 6 Apr 2020 · Zhentao Tan, Dongdong Chen, Qi Chu, Menglei Chai, Jing Liao, Mingming He, Lu Yuan, Nenghai Yu

Despite its impressive performance, a more thorough understanding of what truly drives its advantages is still needed, in order to reduce the significant computation and parameter overheads introduced by these new structures.

Image Generation

Deep Exemplar-based Colorization

1 code implementation · 17 Jul 2018 · Mingming He, Dong-Dong Chen, Jing Liao, Pedro V. Sander, Lu Yuan

More importantly, as opposed to other learning-based colorization methods, our network allows the user to achieve customizable results by simply feeding different references.

Colorization · Image Retrieval

Progressive Color Transfer with Dense Semantic Correspondences

2 code implementations · 2 Oct 2017 · Mingming He, Jing Liao, Dong-Dong Chen, Lu Yuan, Pedro V. Sander

The proposed method can be successfully extended from one-to-one to one-to-many color transfer.
