no code implementations • 13 Feb 2025 • Isabella Liu, Zhan Xu, Wang Yifan, Hao Tan, Zexiang Xu, Xiaolong Wang, Hao Su, Zifan Shi
To achieve this, we organize the joints in breadth-first search (BFS) order, enabling the skeleton to be defined as a sequence of 3D joint locations and parent indices.
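To make this ordering concrete, below is a minimal sketch of BFS-ordered skeleton serialization; the `Joint` class and function names are hypothetical illustrations, not the paper's code.

```python
# Hypothetical sketch: flattening a joint tree in BFS order so each joint
# becomes (3D position, parent index), as the abstract describes.
from collections import deque

class Joint:
    def __init__(self, position, children=None):
        self.position = position          # (x, y, z) tuple
        self.children = children or []    # child Joint objects

def serialize_bfs(root):
    """Return a list of (position, parent_index); the root's parent is -1."""
    tokens = []
    queue = deque([(root, -1)])           # (joint, parent index in output)
    while queue:
        joint, parent_idx = queue.popleft()
        my_idx = len(tokens)
        tokens.append((joint.position, parent_idx))
        for child in joint.children:
            queue.append((child, my_idx))
    return tokens

# Example: a root with two children. BFS order guarantees every parent
# index refers to an earlier entry, so the sequence can be decoded greedily.
root = Joint((0, 0, 0), [Joint((0, 1, 0)), Joint((1, 0, 0))])
print(serialize_bfs(root))
# [((0, 0, 0), -1), ((0, 1, 0), 0), ((1, 0, 0), 0)]
```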
no code implementations • 23 Dec 2024 • Fa-Ting Hong, Zhan Xu, Haiyang Liu, Qinjie Lin, Luchuan Song, Zhixin Shu, Yang Zhou, Duygu Ceylan, Dan Xu
Diffusion-based human animation aims to animate a human character based on a source human image as well as driving signals such as a sequence of poses.
no code implementations • 17 Dec 2024 • Hsin-Ping Huang, Yang Zhou, Jui-Hsien Wang, Difan Liu, Feng Liu, Ming-Hsuan Yang, Zhan Xu
Generating realistic human videos remains a challenging task, with the most effective methods currently relying on a human motion sequence as a control signal.
1 code implementation • 10 Oct 2024 • Desai Xie, Zhan Xu, Yicong Hong, Hao Tan, Difan Liu, Feng Liu, Arie Kaufman, Yang Zhou
Current frontier video diffusion models have demonstrated remarkable results at generating high-quality videos.
1 code implementation • CVPR 2024 • Junjin Xiao, Qing Zhang, Zhan Xu, Wei-Shi Zheng
The core of our approach is to represent humans in complementary dual spaces and to predict disentangled neural fields of geometry, albedo, shadow, and external lighting, from which realistic renderings with high-frequency details can be derived via volumetric rendering.
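As a rough illustration of how such disentangled fields can be combined, the sketch below composites per-sample density, albedo, shadow, and lighting into a pixel color using the standard volume-rendering quadrature; the multiplicative shading model and tensor shapes are assumptions, not the paper's exact formulation.

```python
# Hedged sketch: composite disentangled per-sample fields (density, albedo,
# shadow, lighting) along one ray into a pixel color via volume rendering.
import numpy as np

def composite(density, albedo, shadow, light, deltas):
    """density: (N,), albedo: (N, 3), shadow: (N,), light: (N, 3), deltas: (N,)."""
    alpha = 1.0 - np.exp(-density * deltas)                        # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = alpha * trans                                        # compositing weights
    rgb = albedo * shadow[:, None] * light                         # shaded sample color
    return (weights[:, None] * rgb).sum(axis=0)                    # final pixel color

# Example with 4 samples along a ray:
n = 4
pixel = composite(np.full(n, 0.5), np.full((n, 3), 0.8),
                  np.full(n, 0.9), np.ones((n, 3)), np.full(n, 0.1))
print(pixel)
```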
no code implementations • 22 Jan 2024 • Zhenzhen Weng, Jingyuan Liu, Hao Tan, Zhan Xu, Yang Zhou, Serena Yeung-Levy, Jimei Yang
We present Human-LRM, a diffusion-guided feed-forward model that predicts the implicit field of a human from a single image.
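For context, an implicit field is typically queried as a function from a 3D point (plus image-derived conditioning) to a value such as occupancy; the toy decoder below is a hypothetical stand-in for illustration, not Human-LRM's actual architecture.

```python
# Illustrative sketch of querying a conditioned implicit field: a small MLP
# maps a 3D point plus an image feature vector to an occupancy value.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(35, 64)), np.zeros(64)   # 3 (xyz) + 32 (cond) inputs
W2, b2 = rng.normal(size=(64, 1)), np.zeros(1)

def occupancy(xyz, cond):
    """Query the field at a 3D point given an image feature vector."""
    h = np.maximum(np.concatenate([xyz, cond]) @ W1 + b1, 0.0)   # ReLU layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))                  # value in (0, 1)

# A mesh could then be extracted by evaluating occupancy on a grid and
# running marching cubes on the 0.5 level set.
point = np.zeros(3)
image_features = rng.normal(size=32)   # stand-in for an encoder's output
print(occupancy(point, image_features))
```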
no code implementations • 19 Jan 2024 • Boxiao Pan, Zhan Xu, Chun-Hao Paul Huang, Krishna Kumar Singh, Yang Zhou, Leonidas J. Guibas, Jimei Yang
Generating video background that tailors to foreground subject motion is an important problem for the movie industry and visual effects community.
no code implementations • 6 Jan 2024 • Qiang Li, Yufeng Wu, Zhan Xu, Hefeng Zhou
We introduce a method for managing severely imbalanced high-dimensional data, along with an adaptive predictive approach tailored to the structural characteristics of the data.
1 code implementation • 17 Oct 2022 • Zhan Xu, Yang Zhou, Li Yi, Evangelos Kalogerakis
We present MoRig, a method that automatically rigs character meshes driven by single-view point cloud streams capturing the motion of performing characters.
1 code implementation • CVPR 2022 • Zhan Xu, Matthew Fisher, Yang Zhou, Deepali Aneja, Rushikesh Dudhat, Li Yi, Evangelos Kalogerakis
Rigged puppets are one of the most prevalent representations for creating 2D character animations.
no code implementations • 5 Nov 2021 • Vishnu Sanjay Ramiya Srinivasan, Rui Ma, Qiang Tang, Zili Yi, Zhan Xu
Recent learning-based inpainting algorithms have achieved compelling results for completing missing regions after removing undesired objects in videos.
no code implementations • 27 Oct 2021 • Ajmal Shahbaz, Salman Khan, Mohammad Asiful Hossain, Vincenzo Lomonaco, Kevin Cannons, Zhan Xu, Fabio Cuzzolin
This paper formalizes a new continual semi-supervised learning (CSSL) paradigm, introduced to the machine learning community via the IJCAI 2021 International Workshop on Continual Semi-Supervised Learning (CSSL-IJCAI), with the goal of raising awareness of this problem and mobilizing research effort in this direction.
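As a rough sketch of the setting, the loop below processes a time-ordered stream in which only some examples carry labels, using generic self-training on the unlabeled part; both the toy model and the update rule are illustrative assumptions, not the protocol fixed by the workshop.

```python
# Hedged sketch of continual semi-supervised learning: batches arrive over
# time, a supervised update uses the labeled fraction, and confident
# predictions on unlabeled data are folded back in (self-training).
class ToyModel:
    def __init__(self):
        self.counts = {}                     # label -> frequency (majority-class toy)
    def fit_incremental(self, pairs):
        for _, y in pairs:
            self.counts[y] = self.counts.get(y, 0) + 1
    def predict_with_confidence(self, x):
        if not self.counts:
            return None, 0.0
        y = max(self.counts, key=self.counts.get)
        return y, self.counts[y] / sum(self.counts.values())

def cssl_loop(model, stream, threshold=0.9):
    for batch in stream:                     # batches arrive over time
        labeled = [(x, y) for x, y in batch if y is not None]
        unlabeled = [x for x, y in batch if y is None]
        model.fit_incremental(labeled)       # supervised update on labeled part
        for x in unlabeled:                  # self-train on confident predictions
            y_hat, conf = model.predict_with_confidence(x)
            if y_hat is not None and conf >= threshold:
                model.fit_incremental([(x, y_hat)])

stream = [[(0.1, "a"), (0.2, None)], [(0.3, None), (0.4, "a")]]
model = ToyModel()
cssl_loop(model, stream)
print(model.counts)
```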
no code implementations • 1 Aug 2020 • Zili Yi, Qiang Tang, Vishnu Sanjay Ramiya Srinivasan, Zhan Xu
It requires the generator to be trained only on small images, yet can perform inference on images of any size.
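One common way this property arises is a fully convolutional design: with no fixed-size dense layers, weights learned on small crops apply at any resolution. The toy generator below illustrates the idea and is not the paper's architecture.

```python
# Hedged illustration: a generator with only convolutional layers accepts
# inputs of arbitrary spatial size, so training on small images does not
# constrain the inference resolution.
import torch
import torch.nn as nn

generator = nn.Sequential(                 # no Linear layers, so size-agnostic
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)

small = torch.randn(1, 3, 64, 64)          # training-time resolution
large = torch.randn(1, 3, 512, 1024)       # arbitrary inference resolution
print(generator(small).shape)              # torch.Size([1, 3, 64, 64])
print(generator(large).shape)              # torch.Size([1, 3, 512, 1024])
```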
6 code implementations • CVPR 2020 • Zili Yi, Qiang Tang, Shekoofeh Azizi, Daesik Jang, Zhan Xu
Since the network's convolutional layers only need to operate on low-resolution inputs and outputs, memory and compute costs are kept low (a sketch of this cost argument follows this entry).
Ranked #6 on Image Inpainting on Places2 val
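A minimal sketch of the cost argument, assuming a down/upsample pipeline in which all convolution happens at low resolution and high-frequency content is restored outside the network; the factor and pipeline are illustrative, not the paper's exact method.

```python
# Hedged sketch: the network only ever sees a downsampled image; its output
# is upsampled back and high-frequency residuals of the input are reused.
# Conv FLOPs scale with pixel count, so an 8x downsample cuts them ~64x.
import torch
import torch.nn.functional as F

def low_res_pipeline(image, net, factor=8):
    """image: (1, 3, H, W); net only ever sees the (H/factor, W/factor) image."""
    h, w = image.shape[-2:]
    small = F.interpolate(image, scale_factor=1 / factor, mode="bilinear")
    out_small = net(small)                                        # all conv cost here
    out = F.interpolate(out_small, size=(h, w), mode="bilinear")
    residual = image - F.interpolate(small, size=(h, w), mode="bilinear")
    return out + residual                  # cheap reuse of high-frequency detail

net = torch.nn.Conv2d(3, 3, 3, padding=1)  # stand-in for the low-res network
img = torch.randn(1, 3, 1024, 1024)
print(low_res_pipeline(img, net).shape)    # torch.Size([1, 3, 1024, 1024])
```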
1 code implementation • 1 May 2020 • Zhan Xu, Yang Zhou, Evangelos Kalogerakis, Chris Landreth, Karan Singh
We present RigNet, an end-to-end automated method for producing animation rigs from input character models.
1 code implementation • 22 Aug 2019 • Zhan Xu, Yang Zhou, Evangelos Kalogerakis, Karan Singh
We present a learning method for predicting animation skeletons for input 3D models of articulated characters.
no code implementations • 24 May 2018 • Yang Zhou, Zhan Xu, Chris Landreth, Evangelos Kalogerakis, Subhransu Maji, Karan Singh
We present a novel deep learning-based approach for producing animator-centric speech motion curves that drive a JALI or standard FACS-based production face-rig directly from input audio.