Search Results for author: Xinya Chen

Found 6 papers, 2 papers with code

Learning 3D-Aware GANs from Unposed Images with Template Feature Field

no code implementations • 8 Apr 2024 • Xinya Chen, Hanlei Guo, Yanrui Bin, Shangzhan Zhang, Yuanbo Yang, Yue Wang, Yujun Shen, Yiyi Liao

Collecting accurate camera poses for training images has been shown to benefit the learning of 3D-aware generative adversarial networks (GANs), yet doing so can be quite expensive in practice.

Pose Estimation

Relevant Region Prediction for Crowd Counting

no code implementations • 20 May 2020 • Xinya Chen, Yanrui Bin, Changxin Gao, Nong Sang, Hao Tang

The module builds a fully connected directed graph between regions of different density, where each node (region) is represented by a weighted, globally pooled feature, and a GCN is learned to map this region graph to a set of relation-aware region representations.

Crowd Counting • Relation
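The abstract excerpt above describes a relation module: a fully connected directed graph over density regions whose nodes carry pooled features, refined by a graph convolution. The following is a minimal PyTorch sketch of that idea, not the authors' released code; the class name, layer sizes, and the single-step message passing are assumptions for illustration.

```python
# Hypothetical sketch of a relation-aware region module, assuming per-region
# features have already been pooled from a crowd-counting backbone.
import torch
import torch.nn as nn


class RegionRelationModule(nn.Module):
    """Maps per-region pooled features to relation-aware representations
    via one graph-convolution step over a fully connected region graph."""

    def __init__(self, feat_dim: int):
        super().__init__()
        # Learned score for every ordered (sender, receiver) region pair.
        self.edge_score = nn.Linear(2 * feat_dim, 1)
        # Node update applied after aggregating neighbour messages.
        self.node_update = nn.Linear(feat_dim, feat_dim)

    def forward(self, region_feats: torch.Tensor) -> torch.Tensor:
        # region_feats: (num_regions, feat_dim), one weighted globally
        # pooled feature vector per density region.
        n, d = region_feats.shape
        src = region_feats.unsqueeze(1).expand(n, n, d)  # sender features
        dst = region_feats.unsqueeze(0).expand(n, n, d)  # receiver features
        # Fully connected directed graph: score each edge, normalise per node.
        scores = self.edge_score(torch.cat([src, dst], dim=-1)).squeeze(-1)
        adj = torch.softmax(scores, dim=-1)              # (n, n) edge weights
        # Aggregate messages along weighted edges and update each node.
        aggregated = adj @ region_feats                  # (n, d)
        return torch.relu(self.node_update(aggregated)) + region_feats


# Example: 4 regions with 256-d pooled features.
relation = RegionRelationModule(256)
out = relation(torch.randn(4, 256))  # -> (4, 256) relation-aware features
```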

Expression Conditional GAN for Facial Expression-to-Expression Translation

no code implementations • 14 May 2019 • Hao Tang, Wei Wang, Songsong Wu, Xinya Chen, Dan Xu, Nicu Sebe, Yan Yan

In this paper, we focus on the facial expression translation task and propose a novel Expression Conditional GAN (ECGAN) that can learn the mapping from one image domain to another based on an additional expression attribute.

Attribute • Facial Expression Generation +2
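To make the "mapping conditioned on an expression attribute" concrete, here is a minimal PyTorch sketch of a generator that takes a face image plus a one-hot expression code, with the code tiled spatially and concatenated to the input channels. It is a hypothetical illustration, not the released ECGAN architecture; the class name, channel counts, and the specific conditioning scheme are assumptions.

```python
# Hypothetical expression-conditional generator sketch (not the ECGAN code).
import torch
import torch.nn as nn


class ExpressionConditionalGenerator(nn.Module):
    """Translates a face image toward a target expression by concatenating
    the spatially tiled expression label with the input image channels."""

    def __init__(self, num_expressions: int = 7, base_channels: int = 64):
        super().__init__()
        in_ch = 3 + num_expressions  # RGB image + tiled expression code
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base_channels, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_channels * 2, base_channels, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_channels, 3, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, image: torch.Tensor, expression: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W); expression: (B, num_expressions) one-hot labels.
        b, _, h, w = image.shape
        cond = expression[:, :, None, None].expand(b, expression.shape[1], h, w)
        return self.net(torch.cat([image, cond], dim=1))


# Example: translate a 128x128 face toward expression class 3 of 7.
gen = ExpressionConditionalGenerator()
label = torch.eye(7)[torch.tensor([3])]
fake = gen(torch.randn(1, 3, 128, 128), label)  # -> (1, 3, 128, 128)
```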

Attribute-Guided Sketch Generation

1 code implementation • 28 Jan 2019 • Hao Tang, Xinya Chen, Wei Wang, Dan Xu, Jason J. Corso, Nicu Sebe, Yan Yan

To this end, we propose a novel Attribute-Guided Sketch Generative Adversarial Network (ASGAN), an end-to-end framework containing two pairs of generators and discriminators: one pair generates faces with the target attributes, while the other performs image-to-sketch translation.

Attribute • Generative Adversarial Network +1
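The abstract excerpt describes a two-stage layout with two generator/discriminator pairs. The sketch below shows one way such a pipeline could be wired in PyTorch: the first generator produces an attribute-conditioned face, the second translates that face into a sketch, and each stage has its own discriminator. This is a hypothetical illustration under assumed names and layer sizes, not the released ASGAN implementation.

```python
# Hypothetical two-pair GAN pipeline sketch (not the ASGAN code).
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))


class ASGANSketchPipeline(nn.Module):
    def __init__(self, num_attributes: int = 5):
        super().__init__()
        # Pair 1: attribute-conditioned face generator and its discriminator.
        self.g_face = nn.Sequential(
            conv_block(3 + num_attributes, 64), nn.Conv2d(64, 3, 3, padding=1), nn.Tanh()
        )
        self.d_face = nn.Sequential(conv_block(3, 64), nn.Conv2d(64, 1, 3, padding=1))
        # Pair 2: face-to-sketch translator and its discriminator.
        self.g_sketch = nn.Sequential(
            conv_block(3, 64), nn.Conv2d(64, 1, 3, padding=1), nn.Tanh()
        )
        self.d_sketch = nn.Sequential(conv_block(1, 64), nn.Conv2d(64, 1, 3, padding=1))

    def forward(self, face: torch.Tensor, attributes: torch.Tensor):
        # face: (B, 3, H, W); attributes: (B, num_attributes) binary codes.
        b, _, h, w = face.shape
        cond = attributes[:, :, None, None].expand(b, attributes.shape[1], h, w)
        attributed_face = self.g_face(torch.cat([face, cond], dim=1))  # stage 1
        sketch = self.g_sketch(attributed_face)                        # stage 2
        return attributed_face, sketch


# Example: two 64x64 faces with 5 binary attribute flags each.
pipe = ASGANSketchPipeline()
face_out, sketch_out = pipe(torch.randn(2, 3, 64, 64), torch.randint(0, 2, (2, 5)).float())
```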
