Search Results for author: Zhan Xu

Found 13 papers, 6 papers with code

NECA: Neural Customizable Human Avatar

1 code implementation • 15 Mar 2024 • Junjin Xiao, Qing Zhang, Zhan Xu, Wei-Shi Zheng

The core of our approach is to represent humans in complementary dual spaces and predict disentangled neural fields of geometry, albedo, shadow, as well as external lighting, from which we are able to derive realistic rendering with high-frequency details via volumetric rendering.
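To make the composition concrete, here is a minimal sketch of disentangled fields combined through volumetric rendering, assuming toy MLP fields and a simple albedo-times-lighting-times-shadow shading model; the field definitions and sizes are illustrative placeholders, not NECA's actual architecture.

```python
# Illustrative sketch only: disentangled fields composed via volumetric rendering.
import torch
import torch.nn as nn

class TinyField(nn.Module):
    def __init__(self, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, out_dim))

    def forward(self, x):
        return self.net(x)

density_field = TinyField(1)   # geometry (volume density)
albedo_field = TinyField(3)    # per-point base color
shadow_field = TinyField(1)    # per-point shadow / light visibility

def render_ray(points, deltas, light_rgb):
    """Composite samples along one ray: shaded color = albedo * lighting * shadow."""
    sigma = torch.relu(density_field(points)).squeeze(-1)         # (N,)
    albedo = torch.sigmoid(albedo_field(points))                  # (N, 3)
    shadow = torch.sigmoid(shadow_field(points))                  # (N, 1)
    rgb = albedo * light_rgb * shadow                             # shaded color per sample
    alpha = 1.0 - torch.exp(-sigma * deltas)                      # per-sample opacity
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans                                       # volume-rendering weights
    return (weights.unsqueeze(-1) * rgb).sum(dim=0)               # final pixel color

pixel = render_ray(torch.rand(64, 3), torch.full((64,), 0.01), torch.tensor([1.0, 1.0, 1.0]))
```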

Template-Free Single-View 3D Human Digitalization with Diffusion-Guided LRM

no code implementations • 22 Jan 2024 • Zhenzhen Weng, Jingyuan Liu, Hao Tan, Zhan Xu, Yang Zhou, Serena Yeung-Levy, Jimei Yang

We present Human-LRM, a diffusion-guided feed-forward model that predicts the implicit field of a human from a single image.
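As a rough sketch of a feed-forward image-to-implicit-field model (the encoder, the query MLP, and all sizes below are generic placeholders; the diffusion guidance described in the abstract is not modeled here):

```python
# Generic single-image -> implicit-field sketch; not the actual Human-LRM architecture.
import torch
import torch.nn as nn

class ImageToField(nn.Module):
    def __init__(self, feat_dim=32):
        super().__init__()
        # Toy image encoder producing a flat feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        # Query MLP: (3D point, image feature) -> (density, RGB).
        self.field = nn.Sequential(nn.Linear(3 + feat_dim, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, image, points):
        feat = self.encoder(image)                                  # (B, feat_dim)
        feat = feat.unsqueeze(1).expand(-1, points.shape[1], -1)    # (B, N, feat_dim)
        out = self.field(torch.cat([points, feat], dim=-1))         # (B, N, 4)
        return out[..., :1], torch.sigmoid(out[..., 1:])            # density, color

model = ImageToField()
image = torch.rand(1, 3, 128, 128)        # single input image
points = torch.rand(1, 4096, 3)           # 3D query points
density, color = model(image, points)
```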

ActAnywhere: Subject-Aware Video Background Generation

no code implementations • 19 Jan 2024 • Boxiao Pan, Zhan Xu, Chun-Hao Paul Huang, Krishna Kumar Singh, Yang Zhou, Leonidas J. Guibas, Jimei Yang

Generating video backgrounds tailored to foreground subject motion is an important problem for the movie industry and the visual effects community.

Exploration of Adolescent Depression Risk Prediction Based on Census Surveys and General Life Issues

no code implementations • 6 Jan 2024 • Qiang Li, Yufeng Wu, Zhan Xu, Hefeng Zhou

We introduced a method for managing severely imbalanced high-dimensional data and an adaptive predictive approach tailored to data structure characteristics.

Tasks: Facial Expression Recognition
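The entry highlights handling severely imbalanced high-dimensional data. One standard way to do this, shown purely as an illustration rather than the paper's adaptive approach, is to reweight classes in the loss:

```python
# Generic class-imbalance handling via class weighting; illustrative only,
# not the adaptive predictive approach described in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 200))                      # high-dimensional features
y = (rng.random(2000) < 0.05).astype(int)             # ~5% positives: severe imbalance

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" scales each class inversely to its frequency,
# so the rare positive class contributes as much to the loss as the majority class.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```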

MoRig: Motion-Aware Rigging of Character Meshes from Point Clouds

1 code implementation • 17 Oct 2022 • Zhan Xu, Yang Zhou, Li Yi, Evangelos Kalogerakis

We present MoRig, a method that automatically rigs character meshes driven by single-view point cloud streams capturing the motion of performing characters.
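A rig produced this way ultimately drives a standard deformation model. As context, here is a minimal linear blend skinning step that consumes predicted joints and skinning weights; it is the generic downstream deformation, not MoRig's prediction network:

```python
# Linear blend skinning: deform vertices with per-joint transforms and skinning weights.
import numpy as np

def linear_blend_skinning(vertices, weights, joint_transforms):
    """vertices: (V, 3), weights: (V, J) rows summing to 1, joint_transforms: (J, 4, 4)."""
    V = vertices.shape[0]
    homo = np.concatenate([vertices, np.ones((V, 1))], axis=1)     # homogeneous coords (V, 4)
    per_joint = np.einsum('jab,vb->jva', joint_transforms, homo)   # each joint moves every vertex
    blended = np.einsum('vj,jva->va', weights, per_joint)          # blend by skinning weights
    return blended[:, :3]

vertices = np.random.rand(100, 3)
weights = np.random.rand(100, 2)
weights /= weights.sum(axis=1, keepdims=True)                      # normalize per-vertex weights
transforms = np.stack([np.eye(4), np.eye(4)])                      # two identity joints
deformed = linear_blend_skinning(vertices, weights, transforms)
```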

Spatial-Temporal Residual Aggregation for High Resolution Video Inpainting

no code implementations • 5 Nov 2021 • Vishnu Sanjay Ramiya Srinivasan, Rui Ma, Qiang Tang, Zili Yi, Zhan Xu

Recent learning-based inpainting algorithms have achieved compelling results for completing missing regions after removing undesired objects in videos.

Tasks: Video Inpainting, Vocal Bursts Intensity Prediction

International Workshop on Continual Semi-Supervised Learning: Introduction, Benchmarks and Baselines

no code implementations • 27 Oct 2021 • Ajmal Shahbaz, Salman Khan, Mohammad Asiful Hossain, Vincenzo Lomonaco, Kevin Cannons, Zhan Xu, Fabio Cuzzolin

This paper formalizes a new continual semi-supervised learning (CSSL) paradigm, introduced to the machine learning community via the IJCAI 2021 International Workshop on Continual Semi-Supervised Learning (CSSL-IJCAI), with the aim of raising awareness of this problem and mobilizing effort in this direction.

Tasks: Activity Recognition, Crowd Counting

Animating Through Warping: an Efficient Method for High-Quality Facial Expression Animation

no code implementations • 1 Aug 2020 • Zili Yi, Qiang Tang, Vishnu Sanjay Ramiya Srinivasan, Zhan Xu

It requires the generator to be trained only on small images, yet can run inference on images of any size.

Tasks: 4K
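One way to read the train-small, infer-large property is that the generator predicts a warp field at low resolution, which can then be upsampled and applied to an image of any size. The sketch below illustrates that idea with a stub predictor; it is an assumption-laden illustration, not the paper's network:

```python
# Sketch: predict a warp field at low resolution, upsample it, warp the full-res image.
import torch
import torch.nn.functional as F

def warp_full_res(image_hr, flow_predictor, low_size=(128, 128)):
    """image_hr: (1, 3, H, W). flow_predictor maps a small image to a (1, 2, h, w) flow."""
    B, _, H, W = image_hr.shape
    image_lr = F.interpolate(image_hr, size=low_size, mode='bilinear', align_corners=False)
    flow_lr = flow_predictor(image_lr)                              # offsets in normalized coords
    flow_hr = F.interpolate(flow_lr, size=(H, W), mode='bilinear', align_corners=False)

    # Build a normalized sampling grid and add the predicted offsets.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing='ij')
    base_grid = torch.stack([xs, ys], dim=-1).unsqueeze(0)          # (1, H, W, 2)
    grid = base_grid + flow_hr.permute(0, 2, 3, 1)
    return F.grid_sample(image_hr, grid, mode='bilinear', align_corners=False)

# Toy predictor: zero flow (identity warp), standing in for a trained generator.
identity_flow = lambda x: torch.zeros(x.shape[0], 2, x.shape[2], x.shape[3])
out = warp_full_res(torch.rand(1, 3, 512, 768), identity_flow)
```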

Contextual Residual Aggregation for Ultra High-Resolution Image Inpainting

6 code implementations • CVPR 2020 • Zili Yi, Qiang Tang, Shekoofeh Azizi, Daesik Jang, Zhan Xu

Since the convolutional layers of the neural network only need to operate on low-resolution inputs and outputs, the cost in memory and computing power is kept low.

Tasks: 2K, 8K, +2 more
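The idea behind contextual residual aggregation can be outlined as: complete the hole at low resolution, then add back high-frequency residuals borrowed from the known high-resolution context. The sketch below is heavily simplified (the low-resolution inpainter and the aggregation are stubs; the real method performs attention-weighted residual transfer inside the network):

```python
# Simplified contextual-residual sketch, not the paper's network.
import torch
import torch.nn.functional as F

def cra_like_fill(image_hr, mask_hr, lowres_inpaint, scale=4):
    """image_hr: (1,3,H,W); mask_hr: (1,1,H,W) with 1 = hole; lowres_inpaint: stub network."""
    H, W = image_hr.shape[-2:]
    image_lr = F.interpolate(image_hr, scale_factor=1 / scale, mode='bilinear')
    mask_lr = F.interpolate(mask_hr, scale_factor=1 / scale, mode='nearest')
    filled_lr = lowres_inpaint(image_lr, mask_lr)                   # low-res completion
    filled_up = F.interpolate(filled_lr, size=(H, W), mode='bilinear')

    blurred = F.interpolate(image_lr, size=(H, W), mode='bilinear')
    residual = (image_hr - blurred) * (1 - mask_hr)                 # high-freq detail, context only

    # Crude stand-in for attention-based aggregation: borrow the mean context residual.
    borrowed = residual.sum(dim=(-2, -1), keepdim=True) / (1 - mask_hr).sum().clamp(min=1)
    out = filled_up + borrowed * mask_hr                            # add detail inside the hole
    return image_hr * (1 - mask_hr) + out * mask_hr                 # keep known pixels as-is

stub_inpaint = lambda img, m: img * (1 - m)                         # placeholder inpainter
result = cra_like_fill(torch.rand(1, 3, 512, 512),
                       (torch.rand(1, 1, 512, 512) > 0.9).float(), stub_inpaint)
```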

RigNet: Neural Rigging for Articulated Characters

1 code implementation • 1 May 2020 • Zhan Xu, Yang Zhou, Evangelos Kalogerakis, Chris Landreth, Karan Singh

We present RigNet, an end-to-end automated method for producing animation rigs from input character models.

Predicting Animation Skeletons for 3D Articulated Models via Volumetric Nets

1 code implementation • 22 Aug 2019 • Zhan Xu, Yang Zhou, Evangelos Kalogerakis, Karan Singh

We present a learning method for predicting animation skeletons for input 3D models of articulated characters.
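As a rough illustration of the volumetric formulation, a small 3D CNN can map an occupancy grid to a per-voxel joint probability volume, from which candidate joints are extracted; the network and the peak extraction below are placeholders, not the paper's actual model:

```python
# Illustrative voxel-based joint prediction sketch.
import torch
import torch.nn as nn

class JointVolumeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 1, 3, padding=1))

    def forward(self, occupancy):
        return torch.sigmoid(self.net(occupancy))        # per-voxel joint probability

def extract_joints(prob, threshold=0.5, max_joints=30):
    """Take the highest-probability voxels above a threshold as candidate joints."""
    D, H, W = prob.shape
    scores, idx = prob.flatten().topk(max_joints)
    idx = idx[scores > threshold]
    z, rem = idx // (H * W), idx % (H * W)
    return torch.stack([z, rem // W, rem % W], dim=-1)   # (K, 3) voxel coordinates

net = JointVolumeNet()
occupancy = (torch.rand(1, 1, 32, 32, 32) > 0.7).float() # voxelized character
prob = net(occupancy)[0, 0]                              # (32, 32, 32)
joints = extract_joints(prob)
```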

VisemeNet: Audio-Driven Animator-Centric Speech Animation

no code implementations • 24 May 2018 • Yang Zhou, Zhan Xu, Chris Landreth, Evangelos Kalogerakis, Subhransu Maji, Karan Singh

We present a novel deep-learning-based approach to producing animator-centric speech motion curves that drive a JALI or standard FACS-based production face-rig, directly from input audio.

Tasks: Graphics
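At a high level, such a model maps per-frame audio features to per-frame animation-curve values. The toy sketch below uses an LSTM over MFCC-like features; the feature choice, curve semantics, and sizes are placeholders, not VisemeNet's actual three-stage network:

```python
# Toy audio-to-animation-curve sketch.
import torch
import torch.nn as nn

class AudioToCurves(nn.Module):
    def __init__(self, n_audio_feats=26, n_curves=8):
        super().__init__()
        self.lstm = nn.LSTM(n_audio_feats, 64, batch_first=True)
        self.head = nn.Linear(64, n_curves)

    def forward(self, audio_feats):                      # (B, T, n_audio_feats)
        hidden, _ = self.lstm(audio_feats)
        return torch.sigmoid(self.head(hidden))          # (B, T, n_curves) in [0, 1]

model = AudioToCurves()
audio_feats = torch.rand(1, 200, 26)                     # e.g. 200 frames of MFCC-like features
curves = model(audio_feats)                              # per-frame rig activation curves
```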
