Search Results for author: Zhuoqian Yang

Found 5 papers, 2 papers with code

3DHumanGAN: Towards Photo-Realistic 3D-Aware Human Image Generation

1 code implementation • 14 Dec 2022 • Zhuoqian Yang, Shikai Li, Wayne Wu, Bo Dai

We present 3DHumanGAN, a 3D-aware generative adversarial network (GAN) that synthesizes images of full-body humans with consistent appearances under different view-angles and body-poses.

Image Generation

MoCaNet: Motion Retargeting in-the-wild via Canonicalization Networks

no code implementations • 19 Dec 2021 • Wentao Zhu, Zhuoqian Yang, Ziang Di, Wayne Wu, Yizhou Wang, Chen Change Loy

Trained with the canonicalization operations and the derived regularizations, our method learns to factorize a skeleton sequence into three independent semantic subspaces, i.e., motion, structure, and view angle.

3D Reconstruction • Action Analysis +2

TransMoMo: Invariance-Driven Unsupervised Video Motion Retargeting

no code implementations • CVPR 2020 • Zhuoqian Yang, Wentao Zhu, Wayne Wu, Chen Qian, Qiang Zhou, Bolei Zhou, Chen Change Loy

We present TransMoMo, a lightweight video motion retargeting approach capable of realistically transferring the motion of a person in a source video to another video of a target person.

Motion Retargeting

Scene Graph Reasoning with Prior Visual Relationship for Visual Question Answering

no code implementations • 23 Dec 2018 • Zhuoqian Yang, Zengchang Qin, Jing Yu, Yue Hu

On the constructed graph, we propose a Scene Graph Convolutional Network (SceneGCN) to jointly reason over object properties and relational semantics to arrive at the correct answer.

Cross-Modal Information Retrieval • Information Retrieval +3

Textual Relationship Modeling for Cross-Modal Information Retrieval

1 code implementation • 31 Oct 2018 • Jing Yu, Chenghao Yang, Zengchang Qin, Zhuoqian Yang, Yue Hu, Yanbing Liu

A joint neural model is proposed to learn feature representations individually in each modality.
