Search Results for author: Guoxian Song

Found 22 papers, 10 papers with code

Conditional Adversarial Synthesis of 3D Facial Action Units

no code implementations21 Feb 2018 Zhilei Liu, Guoxian Song, Jianfei Cai, Tat-Jen Cham, Juyong Zhang

Employing deep learning-based approaches for fine-grained facial expression analysis, such as those involving the estimation of Action Unit (AU) intensities, is difficult due to the lack of a large-scale dataset of real faces with sufficiently diverse AU labels for training.

Data Augmentation Image Generation

Real-time 3D Face-Eye Performance Capture of a Person Wearing VR Headset

no code implementations21 Jan 2019 Guoxian Song, Jianfei Cai, Tat-Jen Cham, Jianmin Zheng, Juyong Zhang, Henry Fuchs

Teleconference or telepresence based on a virtual reality (VR) head-mounted display (HMD) is a promising application, since HMDs can provide an immersive experience for users.

Recovering Facial Reflectance and Geometry from Multi-view Images

no code implementations27 Nov 2019 Guoxian Song, Jianmin Zheng, Jianfei Cai, Tat-Jen Cham

While the problem of estimating shapes and diffuse reflectances of human faces from images has been extensively studied, there is relatively less work done on recovering the specular albedo.

Face Model

GeoConv: Geodesic Guided Convolution for Facial Action Unit Recognition

no code implementations6 Mar 2020 Yuedong Chen, Guoxian Song, Zhiwen Shao, Jianfei Cai, Tat-Jen Cham, Jianmin Zheng

Automatic facial action unit (AU) recognition has attracted great attention but still remains a challenging task, as subtle changes of local facial muscles are difficult to thoroughly capture.

Face Model Facial Action Unit Detection

Visiting the Invisible: Layer-by-Layer Completed Scene Decomposition

1 code implementation12 Apr 2021 Chuanxia Zheng, Duy-Son Dao, Guoxian Song, Tat-Jen Cham, Jianfei Cai

In this work, we propose a higher-level scene understanding system to tackle both visible and invisible parts of objects and backgrounds in a given scene.

Instance Segmentation Scene Understanding +1

AgileGAN: stylizing portraits by inversion-consistent transfer learning

1 code implementation ACM Transactions on Graphics 2021 Guoxian Song, Linjie Luo, Jing Liu, Wan-Chun Ma, Chun-Pong Lai, Chuanxia Zheng, Tat-Jen Cham

While substantial progress has been made in automated stylization, generating high quality stylistic portraits is still a challenge, and even the recent popular Toonify suffers from several artifacts when used on real input images.

Attribute motion retargeting +1
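The inversion-consistent transfer idea in the AgileGAN abstract can be sketched with toy linear stand-ins for the networks: invert the portrait into a latent code, then decode it with a stylized copy of the generator that shares that latent space. All names and shapes here are hypothetical simplifications; the actual system uses a StyleGAN2 generator and a learned hierarchical encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for the networks (shapes are arbitrary).
W_enc = rng.standard_normal((8, 16))                     # encoder: image -> latent
W_dec = np.linalg.pinv(W_enc)                            # original generator: latent -> image
W_sty = W_dec + 0.1 * rng.standard_normal(W_dec.shape)   # fine-tuned stylized generator

def stylize(image):
    """Invert the portrait into the shared latent space, then decode
    with the stylized generator fine-tuned from the original one."""
    latent = W_enc @ image        # GAN-inversion step
    return W_sty @ latent         # stylized decoding step

image = rng.standard_normal(16)
latent = W_enc @ image
styled = stylize(image)
```

Inversion consistency means the original generator should approximately reconstruct the input from the same latent code the stylized generator consumes, so the style transfer preserves identity.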

High-Quality Pluralistic Image Completion via Code Shared VQGAN

no code implementations5 Apr 2022 Chuanxia Zheng, Guoxian Song, Tat-Jen Cham, Jianfei Cai, Dinh Phung, Linjie Luo

In this work, we present a novel framework for pluralistic image completion that can achieve both high quality and diversity at much faster inference speed.

Image Reconstruction Vocal Bursts Intensity Prediction

Using Augmented Face Images to Improve Facial Recognition Tasks

no code implementations13 May 2022 Shuo Cheng, Guoxian Song, Wan-Chun Ma, Chao Wang, Linjie Luo

We present a framework that uses GAN-augmented images to supplement specific attributes that are usually underrepresented in machine learning training data.

BIG-bench Machine Learning
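The balancing idea in the abstract can be sketched without any GAN at all: count attribute frequencies, then top up underrepresented groups with synthetic samples until the distribution is even. The `generate_augmented` callable below stands in for the GAN and is entirely hypothetical.

```python
from collections import Counter

def balance_with_synthetic(samples, attribute_of, generate_augmented):
    """Top up underrepresented attribute groups with synthetic samples
    until every group matches the size of the largest one."""
    counts = Counter(attribute_of(s) for s in samples)
    target = max(counts.values())
    augmented = list(samples)
    for attr, n in counts.items():
        augmented += [generate_augmented(attr) for _ in range(target - n)]
    return augmented

# Toy data: "glasses" is underrepresented relative to "no_glasses".
data = ["no_glasses"] * 5 + ["glasses"] * 2
balanced = balance_with_synthetic(
    data,
    attribute_of=lambda s: s,
    generate_augmented=lambda attr: f"synthetic_{attr}",
)
```

In the paper's setting the samples would be face images and `attribute_of` a labeled facial attribute; the sketch only shows the bookkeeping around the generator.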

AvatarGen: a 3D Generative Model for Animatable Human Avatars

1 code implementation1 Aug 2022 Jianfeng Zhang, Zihang Jiang, Dingdong Yang, Hongyi Xu, Yichun Shi, Guoxian Song, Zhongcong Xu, Xinchao Wang, Jiashi Feng

Unsupervised generation of clothed virtual humans with various appearance and animatable poses is important for creating 3D human avatars and other AR/VR applications.

3D Human Reconstruction

AgileAvatar: Stylized 3D Avatar Creation via Cascaded Domain Bridging

no code implementations15 Nov 2022 Shen Sang, Tiancheng Zhi, Guoxian Song, Minghao Liu, Chunpong Lai, Jing Liu, Xiang Wen, James Davis, Linjie Luo

We propose a novel self-supervised learning framework to create high-quality stylized 3D avatars with a mix of continuous and discrete parameters.

Self-Supervised Learning

AvatarGen: A 3D Generative Model for Animatable Human Avatars

1 code implementation26 Nov 2022 Jianfeng Zhang, Zihang Jiang, Dingdong Yang, Hongyi Xu, Yichun Shi, Guoxian Song, Zhongcong Xu, Xinchao Wang, Jiashi Feng

Specifically, we decompose the generative 3D human synthesis into pose-guided mapping and canonical representation with predefined human pose and shape, such that the canonical representation can be explicitly driven to different poses and shapes with the guidance of a 3D parametric human model SMPL.
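Driving a canonical representation to different poses, as described above, is in essence linear blend skinning against the pose parameters of a parametric body model. Below is a minimal single-joint toy sketch of that mechanism, not the actual SMPL model (which also has shape blend shapes and a joint hierarchy).

```python
import numpy as np

def rodrigues(axis_angle):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-8:
        return np.eye(3)
    k = axis_angle / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def pose_points(canonical_pts, joint_rotations, skin_weights):
    """Linear blend skinning: each canonical point is moved by a
    weighted blend of per-joint rotations (joints at the origin)."""
    posed = np.zeros_like(canonical_pts)
    for j, rot in enumerate(joint_rotations):
        posed += skin_weights[:, j:j + 1] * (canonical_pts @ rot.T)
    return posed

pts = np.array([[0.0, 1.0, 0.0],
                [0.0, 2.0, 0.0]])
rots = [rodrigues(np.array([0.0, 0.0, np.pi / 2])), np.eye(3)]
w = np.array([[1.0, 0.0],    # first point fully bound to the rotating joint
              [0.0, 1.0]])   # second point fully bound to the fixed joint
posed = pose_points(pts, rots, w)
```

Because the mapping from canonical to posed space is explicit, the same canonical representation can be re-driven by any pose parameters, which is what makes the generated avatars animatable.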

PanoHead: Geometry-Aware 3D Full-Head Synthesis in 360°

1 code implementation CVPR 2023 Sizhe An, Hongyi Xu, Yichun Shi, Guoxian Song, Umit Y. Ogras, Linjie Luo

We propose PanoHead, the first 3D-aware generative model that enables high-quality view-consistent image synthesis of full heads in 360° with diverse appearance and detailed geometry using only in-the-wild unstructured images for training.

Image Generation Image Segmentation +1

PanoHead: Geometry-Aware 3D Full-Head Synthesis in 360$^{\circ}$

1 code implementation23 Mar 2023 Sizhe An, Hongyi Xu, Yichun Shi, Guoxian Song, Umit Ogras, Linjie Luo

We propose PanoHead, the first 3D-aware generative model that enables high-quality view-consistent image synthesis of full heads in $360^\circ$ with diverse appearance and detailed geometry using only in-the-wild unstructured images for training.

Image Generation Image Segmentation +1

AgileGAN3D: Few-Shot 3D Portrait Stylization by Augmented Transfer Learning

no code implementations24 Mar 2023 Guoxian Song, Hongyi Xu, Jing Liu, Tiancheng Zhi, Yichun Shi, Jianfeng Zhang, Zihang Jiang, Jiashi Feng, Shen Sang, Linjie Luo

Capitalizing on the recent advancement of 3D-aware GAN models, we perform \emph{guided transfer learning} on a pretrained 3D GAN generator to produce multi-view-consistent stylized renderings.

Transfer Learning

OmniAvatar: Geometry-Guided Controllable 3D Head Synthesis

no code implementations CVPR 2023 Hongyi Xu, Guoxian Song, Zihang Jiang, Jianfeng Zhang, Yichun Shi, Jing Liu, Wan-Chun Ma, Jiashi Feng, Linjie Luo

We present OmniAvatar, a novel geometry-guided 3D head synthesis model trained from in-the-wild unstructured images that is capable of synthesizing diverse identity-preserved 3D heads with compelling dynamic details under full disentangled control over camera poses, facial expressions, head shapes, articulated neck and jaw poses.

FG-Net: Facial Action Unit Detection with Generalizable Pyramidal Features

1 code implementation23 Aug 2023 Yufeng Yin, Di Chang, Guoxian Song, Shen Sang, Tiancheng Zhi, Jing Liu, Linjie Luo, Mohammad Soleymani

The proposed FG-Net achieves a strong generalization ability for heatmap-based AU detection thanks to the generalizable and semantic-rich features extracted from the pre-trained generative model.

Action Unit Detection Cross-corpus +1
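At inference time, heatmap-based AU detection as used in FG-Net reduces to reading off the peak activation of each AU channel and thresholding it. A minimal sketch of that readout (the threshold value and array shapes are assumptions, not the paper's settings):

```python
import numpy as np

def heatmaps_to_au_predictions(heatmaps, threshold=0.5):
    """Each channel is a spatial heatmap for one action unit; an AU is
    predicted active if its peak activation exceeds the threshold."""
    peaks = heatmaps.reshape(heatmaps.shape[0], -1).max(axis=1)
    return peaks, peaks > threshold

# Toy 3-AU stack of 4x4 heatmaps with one strong peak in channel 1.
maps = np.zeros((3, 4, 4))
maps[1, 2, 2] = 0.9
maps[2, 0, 0] = 0.3
peaks, active = heatmaps_to_au_predictions(maps)
```

The paper's contribution is in how the heatmaps are produced (from semantic-rich features of a pre-trained generative model, which is what gives the cross-corpus generalization); the decoding step above is the standard final stage.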

GETAvatar: Generative Textured Meshes for Animatable Human Avatars

no code implementations ICCV 2023 Xuanmeng Zhang, Jianfeng Zhang, Rohan Chacko, Hongyi Xu, Guoxian Song, Yi Yang, Jiashi Feng

We study the problem of 3D-aware full-body human generation, aiming at creating animatable human avatars with high-quality textures and geometries.

Image Generation

DiffPortrait3D: Controllable Diffusion for Zero-Shot Portrait View Synthesis

1 code implementation20 Dec 2023 Yuming Gu, You Xie, Hongyi Xu, Guoxian Song, Yichun Shi, Di Chang, Jing Yang, Linjie Luo

The rendering view is then manipulated with a novel conditional control module that interprets the camera pose by watching a condition image of a crossed subject from the same view.

Denoising

X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention

no code implementations23 Mar 2024 You Xie, Hongyi Xu, Guoxian Song, Chao Wang, Yichun Shi, Linjie Luo

We propose X-Portrait, an innovative conditional diffusion model tailored for generating expressive and temporally coherent portrait animation.

Disentanglement
