no code implementations • 16 Jan 2025 • Sumit Chaturvedi, Mengwei Ren, Yannick Hold-Geoffroy, Jingyuan Liu, Julie Dorsey, Zhixin Shu
Our method generalizes to diverse real photographs and produces realistic illumination effects, including specular highlights and cast shadows, while preserving the subject's identity.
no code implementations • 23 Dec 2024 • Fa-Ting Hong, Zhan Xu, Haiyang Liu, Qinjie Lin, Luchuan Song, Zhixin Shu, Yang Zhou, Duygu Ceylan, Dan Xu
Diffusion-based human animation aims to animate a human character based on a source human image as well as driving signals such as a sequence of poses.
no code implementations • 23 Dec 2024 • Weijie Lyu, Yi Zhou, Ming-Hsuan Yang, Zhixin Shu
Our pipeline begins by employing a multi-view latent diffusion model that generates consistent side and back views of the head from a single facial input.
no code implementations • 18 Dec 2024 • Hanwen Jiang, Zexiang Xu, Desai Xie, Ziwen Chen, Haian Jin, Fujun Luan, Zhixin Shu, Kai Zhang, Sai Bi, Xin Sun, Jiuxiang Gu, QiXing Huang, Georgios Pavlakos, Hao Tan
We propose scaling up 3D scene reconstruction by training with synthesized data.
no code implementations • 7 Oct 2024 • Jae Shin Yoon, Zhixin Shu, Mengwei Ren, Xuaner Zhang, Yannick Hold-Geoffroy, Krishna Kumar Singh, He Zhang
For robust and natural shadow removal, we propose to train the diffusion model with a compositional repurposing framework: a pre-trained text-guided image generation model is first fine-tuned to harmonize the lighting and color of the foreground with a background scene using a background harmonization dataset; the model is then further fine-tuned to generate a shadow-free portrait image using a shadow-paired dataset.
no code implementations • 29 Sep 2024 • Jingyi Xu, Hieu Le, Zhixin Shu, Yang Wang, Yi-Hsuan Tsai, Dimitris Samaras
The training signals for this predictor are obtained through our emotion-agnostic intensity pseudo-labeling method, without the need for frame-wise intensity labels.
no code implementations • 25 Aug 2024 • Andrew Hou, Zhixin Shu, Xuaner Zhang, He Zhang, Yannick Hold-Geoffroy, Jae Shin Yoon, Xiaoming Liu
Existing portrait relighting methods struggle with precise control over facial shadows, particularly when faced with challenges such as handling hard shadows from directional light sources or adjusting shadows while remaining in harmony with existing lighting conditions.
1 code implementation • 28 Jul 2024 • Chengan He, Xin Sun, Zhixin Shu, Fujun Luan, Sören Pirk, Jorge Alejandro Amador Herrera, Dominik L. Michels, Tuanfeng Y. Wang, Meng Zhang, Holly Rushmeier, Yi Zhou
We present Perm, a learned parametric representation of human 3D hair designed to facilitate various hair-related applications.
1 code implementation • 13 Jun 2024 • Desai Xie, Sai Bi, Zhixin Shu, Kai Zhang, Zexiang Xu, Yi Zhou, Sören Pirk, Arie Kaufman, Xin Sun, Hao Tan
We demonstrate that our LRM-Zero, trained with our fully synthesized Zeroverse, can achieve high visual quality in the reconstruction of real-world objects, competitive with models trained on Objaverse.
no code implementations • CVPR 2024 • Yiqun Mei, Yu Zeng, He Zhang, Zhixin Shu, Xuaner Zhang, Sai Bi, Jianming Zhang, HyunJoon Jung, Vishal M. Patel
At the core of portrait photography is the search for ideal lighting and viewpoint.
no code implementations • 6 Feb 2024 • Alfredo Rivero, ShahRukh Athar, Zhixin Shu, Dimitris Samaras
We transform a set of control signals, such as head pose and expressions, into 3D space with learned deformations to generate the desired rendering.
no code implementations • CVPR 2024 • Desai Xie, Jiahao Li, Hao Tan, Xin Sun, Zhixin Shu, Yi Zhou, Sai Bi, Sören Pirk, Arie E. Kaufman
To this end, we introduce Carve3D, an improved RLFT algorithm coupled with a novel Multi-view Reconstruction Consistency (MRC) metric, to enhance the consistency of multi-view diffusion models.
no code implementations • CVPR 2024 • Mengwei Ren, Wei Xiong, Jae Shin Yoon, Zhixin Shu, Jianming Zhang, HyunJoon Jung, Guido Gerig, He Zhang
Portrait harmonization aims to composite a subject into a new background, adjusting its lighting and color to ensure harmony with the background scene.
no code implementations • 20 Sep 2023 • ShahRukh Athar, Zhixin Shu, Zexiang Xu, Fujun Luan, Sai Bi, Kalyan Sunkavalli, Dimitris Samaras
Surface normal prediction is guided by 3DMM normals, which act as a coarse prior for the normals of the human head, where direct prediction of normals is hard due to the rigid and non-rigid deformations induced by head-pose and facial-expression changes.
no code implementations • 4 Jul 2023 • Zhen Zhu, Yijun Li, Weijie Lyu, Krishna Kumar Singh, Zhixin Shu, Soeren Pirk, Derek Hoiem
We investigate how to generate multimodal image outputs, such as RGB, depth, and surface normals, with a single generative model.
no code implementations • CVPR 2023 • Yiqun Mei, He Zhang, Xuaner Zhang, Jianming Zhang, Zhixin Shu, Yilin Wang, Zijun Wei, Shi Yan, HyunJoon Jung, Vishal M. Patel
Recent portrait relighting methods have achieved realistic results of portrait lighting effects given a desired lighting representation such as an environment map.
no code implementations • CVPR 2024 • Yiran Xu, Zhixin Shu, Cameron Smith, Seoung Wug Oh, Jia-Bin Huang
3D-aware GANs offer new capabilities for view synthesis while preserving the editing functionalities of their 2D counterparts.
no code implementations • CVPR 2023 • Zhengfei Kuang, Fujun Luan, Sai Bi, Zhixin Shu, Gordon Wetzstein, Kalyan Sunkavalli
Recent advances in neural radiance fields have enabled the high-fidelity 3D reconstruction of complex scenes for novel view synthesis.
no code implementations • 24 Aug 2022 • Yuchen Liu, Zhixin Shu, Yijun Li, Zhe Lin, Richard Zhang, S. Y. Kung
While concatenating GAN inversion and a 3D-aware, noise-to-image GAN is a straightforward solution, it is inefficient and may lead to a noticeable drop in editing quality.
no code implementations • CVPR 2022 • ShahRukh Athar, Zexiang Xu, Kalyan Sunkavalli, Eli Shechtman, Zhixin Shu
In this work, we propose RigNeRF, a system that goes beyond just novel view synthesis and enables full control of head pose and facial expressions learned from a single portrait video.
no code implementations • CVPR 2022 • Jae Shin Yoon, Duygu Ceylan, Tuanfeng Y. Wang, Jingwan Lu, Jimei Yang, Zhixin Shu, Hyun Soo Park
The appearance of dressed humans undergoes a complex geometric transformation induced not only by the static pose but also by its dynamics, i.e., there exist a number of cloth geometric configurations for a given pose, depending on how the body has moved.
1 code implementation • CVPR 2022 • Qiangeng Xu, Zexiang Xu, Julien Philip, Sai Bi, Zhixin Shu, Kalyan Sunkavalli, Ulrich Neumann
Point-NeRF combines the advantages of these two approaches by using neural 3D point clouds, with associated neural features, to model a radiance field.
no code implementations • 29 Sep 2021 • ShahRukh Athar, Zhixin Shu, Dimitris Samaras
In this work, we design a system that enables 1) novel view synthesis for portrait video, of both the human subject and the scene they are in and 2) explicit control of the facial expressions through a low-dimensional expression representation.
no code implementations • 29 Sep 2021 • Sagnik Das, Ke Ma, Zhixin Shu, Dimitris Samaras
We also demonstrate the usefulness of our system by applying it to document texture editing.
no code implementations • 13 Sep 2021 • Badour AlBahar, Jingwan Lu, Jimei Yang, Zhixin Shu, Eli Shechtman, Jia-Bin Huang
We present an algorithm for re-rendering a person from a single image under arbitrary poses.
no code implementations • 10 Aug 2021 • ShahRukh Athar, Zhixin Shu, Dimitris Samaras
In this work, we design a system that enables both novel view synthesis for portrait video, including the human subject and the scene background, and explicit control of the facial expressions through a low-dimensional expression representation.
no code implementations • 15 Jul 2021 • Manuel Lagunas, Xin Sun, Jimei Yang, Ruben Villegas, Jianming Zhang, Zhixin Shu, Belen Masia, Diego Gutierrez
We present a single-image data-driven method to automatically relight images with full-body humans in them.
1 code implementation • CVPR 2021 • Yuchen Liu, Zhixin Shu, Yijun Li, Zhe Lin, Federico Perazzi, S. Y. Kung
We then propose a novel content-aware method to guide the processes of both pruning and distillation.
no code implementations • 7 Oct 2020 • Jingyi Xu, Zhixin Shu, Dimitris Samaras
However, some testing data are considered "hard" as they lie close to the decision boundaries and are prone to misclassification, leading to performance degradation for ZSL.
no code implementations • 2 Nov 2019 • ShahRukh Athar, Zhixin Shu, Dimitris Samaras
In the "motion-editing" step, we explicitly model facial movement through image deformation, warping the image into the desired expression.
no code implementations • 26 Apr 2019 • Mihir Sahasrabudhe, Zhixin Shu, Edward Bartrum, Riza Alp Guler, Dimitris Samaras, Iasonas Kokkinos
In this work we introduce Lifting Autoencoders, a generative 3D surface-based model of object categories.
no code implementations • 16 Sep 2018 • Huidong Liu, Yang Guo, Na Lei, Zhixin Shu, Shing-Tung Yau, Dimitris Samaras, Xianfeng Gu
Experimental results on an eight-Gaussian dataset show that the proposed OT can handle multi-cluster distributions.
2 code implementations • ECCV 2018 • Zhixin Shu, Mihir Sahasrabudhe, Alp Guler, Dimitris Samaras, Nikos Paragios, Iasonas Kokkinos
In this work we introduce Deforming Autoencoders, a generative model for images that disentangles shape from appearance in an unsupervised manner.
Ranked #9 on Unsupervised Facial Landmark Detection on MAFL
1 code implementation • CVPR 2018 • Ke Ma, Zhixin Shu, Xue Bai, Jue Wang, Dimitris Samaras
The network is trained on this dataset with various data augmentations to improve its generalization ability.
Ranked #4 on SSIM on DocUNet (using extra training data)
no code implementations • 28 Nov 2017 • Mengjiao Wang, Zhixin Shu, Shiyang Cheng, Yannis Panagakis, Dimitris Samaras, Stefanos Zafeiriou
Several factors contribute to the appearance of an object in a visual scene, including pose, illumination, and deformation, among others.
no code implementations • 8 Sep 2017 • Wuming Zhang, Zhixin Shu, Dimitris Samaras, Liming Chen
Heterogeneous face recognition between color images and depth images is a much-desired capability for real-world applications in which shape information is available only in the gallery.
2 code implementations • CVPR 2017 • Zhixin Shu, Ersin Yumer, Sunil Hadap, Kalyan Sunkavalli, Eli Shechtman, Dimitris Samaras
Traditional face editing methods often require a number of sophisticated, task-specific algorithms to be applied one after the other, a process that is tedious, fragile, and computationally intensive.