Search Results for author: Junshu Tang

Found 6 papers, 4 papers with code

3DFaceShop: Explicitly Controllable 3D-Aware Portrait Generation

1 code implementation • 12 Sep 2022 • Junshu Tang, Bo Zhang, Binxin Yang, Ting Zhang, Dong Chen, Lizhuang Ma, Fang Wen

In contrast to the traditional avatar creation pipeline which is a costly process, contemporary generative approaches directly learn the data distribution from photographs.

3D Face Animation • Disentanglement • +3

Prototype-Aware Heterogeneous Task for Point Cloud Completion

no code implementations • 5 Sep 2022 • Junshu Tang, Jiachen Xu, Jingyu Gong, Haichuan Song, Yuan Xie, Lizhuang Ma

Moreover, for effective training, we adopt a difficulty-based sampling strategy that encourages the network to pay more attention to partial point clouds with less geometric information.

Point Cloud Completion
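The difficulty-based sampling idea described above can be sketched generically: weight each partial point cloud by a difficulty score and draw training batches from the resulting distribution. The inverse-point-count heuristic and all function names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def difficulty_weights(point_counts, eps=1e-8):
    """Assign higher sampling weight to sparser (harder) partial clouds.

    `point_counts` holds the number of observed points per training sample;
    the inverse-count difficulty score is an assumed stand-in for the
    paper's actual difficulty measure.
    """
    counts = np.asarray(point_counts, dtype=np.float64)
    difficulty = 1.0 / (counts + eps)       # fewer points -> harder
    return difficulty / difficulty.sum()    # normalize to a distribution

def sample_batch(point_counts, batch_size, rng=None):
    """Draw a batch of sample indices biased toward difficult examples."""
    rng = rng if rng is not None else np.random.default_rng(0)
    weights = difficulty_weights(point_counts)
    return rng.choice(len(point_counts), size=batch_size, p=weights)

# Example: the 128-point cloud is sampled most often.
counts = [2048, 512, 128, 1024]   # observed points per partial cloud
batch = sample_batch(counts, batch_size=8)
```

In practice such a scheme is usually combined with uniform sampling (e.g. mixing the two distributions) so that easy examples are not forgotten.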

LAKe-Net: Topology-Aware Point Cloud Completion by Localizing Aligned Keypoints

1 code implementation • CVPR 2022 • Junshu Tang, Zhijun Gong, Ran Yi, Yuan Xie, Lizhuang Ma

An asymmetric keypoint locator, including an unsupervised multi-scale keypoint detector and a complete keypoint generator, is proposed for localizing aligned keypoints from complete and partial point clouds.

Point Cloud Completion

Fine-Grained Expression Manipulation via Structured Latent Space

1 code implementation • 21 Apr 2020 • Junshu Tang, Zhiwen Shao, Lizhuang Ma

Most existing expression manipulation methods resort to discrete expression labels, which mainly edit global expressions and ignore the manipulation of fine details.

Explicit Facial Expression Transfer via Fine-Grained Representations

no code implementations • 6 Sep 2019 • Zhiwen Shao, Hengliang Zhu, Junshu Tang, Xuequan Lu, Lizhuang Ma

Instead of using an intermediate estimated guidance, we propose to explicitly transfer facial expression by directly mapping two unpaired input images to two synthesized images with swapped expressions.

Multi-class Classification
