no code implementations • ECCV 2020 • Zhuo Su, Lan Xu, Zerong Zheng, Tao Yu, Yebin Liu, Lu Fang
To enable robust tracking, we embrace both the initial model and various visual cues in a novel performance capture scheme with hybrid motion optimization and semantic volumetric fusion. This scheme can capture challenging human motions in the monocular setting without a pre-scanned detailed template, and it has the reinitialization ability to recover from tracking failures and disappear-reoccur scenarios.
no code implementations • 31 Mar 2024 • Yuxiao Liu, Zhe Li, Yebin Liu, Haoqian Wang
To adequately utilize the available image evidence in multi-view video-based avatar modeling, we propose TexVocab, a novel avatar representation that constructs a texture vocabulary and associates body poses with texture maps for animation.
1 code implementation • 15 Mar 2024 • Ronghui Li, Yuxiang Zhang, Yachao Zhang, Hongwen Zhang, Jie Guo, Yan Zhang, Yebin Liu, Xiu Li
In contrast, the second stage is the local diffusion, which generates detailed motion sequences in parallel under the guidance of the dance primitives and choreographic rules.
Ranked #1 on Motion Synthesis on FineDance
no code implementations • 16 Jan 2024 • Yun Liu, Haolin Yang, Xu Si, Ling Liu, Zipeng Li, Yuxiang Zhang, Yebin Liu, Li Yi
Humans commonly work with multiple objects in daily life and can intuitively transfer manipulation skills to novel objects by understanding object functional regularities.
1 code implementation • 15 Dec 2023 • Jiajun Zhang, Yuxiang Zhang, Hongwen Zhang, Xiao Zhou, Boyao Zhou, Ruizhi Shao, Zonghai Hu, Yebin Liu
To address this, we further propose a complementary training strategy that leverages synthetic data to introduce instance-level shape priors, enabling the disentanglement of occupancy fields for different instances.
no code implementations • 12 Dec 2023 • Yibo Xia, Lizhen Wang, Xiang Deng, Xiaoyan Luo, Yebin Liu
Specifically, we propose a Gaussian Mixture based Expression Generator (GMEG) which can construct a continuous and multi-modal latent space, achieving more flexible emotion manipulation.
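The Gaussian-mixture latent space described above can be sketched as follows. This is a minimal illustration of sampling expression codes from a multi-modal mixture; the component count, latent dimension, weights, and statistics are illustrative placeholders, not GMEG's learned values.

```python
import numpy as np

# Hedged sketch: a Gaussian-mixture latent space for expression codes.
# All parameters below are hypothetical stand-ins for learned values.
rng = np.random.default_rng(0)

K, D = 3, 8                          # assumed: 3 emotion modes, 8-D latent
weights = np.array([0.5, 0.3, 0.2])  # mixing coefficients (sum to 1)
means = rng.normal(size=(K, D))      # one mode per emotion cluster
stds = np.full((K, D), 0.1)          # diagonal std per component

def sample_expression(n):
    """Draw n latent expression codes from the mixture."""
    comps = rng.choice(K, size=n, p=weights)   # pick a mode per sample
    return means[comps] + stds[comps] * rng.normal(size=(n, D))

codes = sample_expression(4)
```

Sampling from a mixture rather than a single Gaussian is what makes the latent space multi-modal: each emotion cluster keeps its own mode, so interpolating within or between modes gives flexible emotion manipulation.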
no code implementations • 10 Dec 2023 • Yi Wang, Jian Ma, Ruizhi Shao, Qiao Feng, Yu-Kun Lai, Yebin Liu, Kun Li
To keep the generated clothing consistent with the target text, we propose a semantic-confidence strategy for clothing that can eliminate the non-clothing content generated by the model.
no code implementations • 7 Dec 2023 • Yufan Chen, Lizhen Wang, Qijing Li, Hongjiang Xiao, Shengping Zhang, Hongxun Yao, Yebin Liu
In response to these challenges, we propose MonoGaussianAvatar (Monocular Gaussian Point-based Head Avatar), a novel approach that harnesses 3D Gaussian point representation coupled with a Gaussian deformation field to learn explicit head avatars from monocular portrait videos.
1 code implementation • 5 Dec 2023 • Yuelang Xu, Benwang Chen, Zhe Li, Hongwen Zhang, Lizhen Wang, Zerong Zheng, Yebin Liu
Creating high-fidelity 3D head avatars has always been a research hotspot, but there remains a great challenge under lightweight sparse view setups.
1 code implementation • 4 Dec 2023 • Shunyuan Zheng, Boyao Zhou, Ruizhi Shao, Boning Liu, Shengping Zhang, Liqiang Nie, Yebin Liu
We present a new approach, termed GPS-Gaussian, for synthesizing novel views of a character in a real-time manner.
no code implementations • 3 Dec 2023 • Xiaochen Zhao, Jingxiang Sun, Lizhen Wang, Yebin Liu
While high fidelity and efficiency are central to the creation of digital head avatars, recent methods relying on 2D or 3D generative models often experience limitations such as shape distortion, expression inaccuracy, and identity flickering.
no code implementations • 29 Nov 2023 • Jinsong Zhang, Minjie Zhu, Yuxiang Zhang, Yebin Liu, Kun Li
Then, we regress the motion representation from the audio signal by a translation model employing our contrastive motion learning method.
1 code implementation • 27 Nov 2023 • Zhe Li, Zerong Zheng, Lizhen Wang, Yebin Liu
Overall, our method can create lifelike avatars with dynamic, realistic and generalized appearances.
1 code implementation • 6 Nov 2023 • Yingzhi Tang, Qijian Zhang, Junhui Hou, Yebin Liu
The latest trends in the research field of single-view human reconstruction are devoted to learning deep implicit functions constrained by explicit body shape priors.
1 code implementation • 25 Oct 2023 • Jingxiang Sun, Bo Zhang, Ruizhi Shao, Lizhen Wang, Wen Liu, Zhenda Xie, Yebin Liu
The score distillation from this 3D-aware diffusion prior provides view-consistent guidance for the scene.
no code implementations • 10 Oct 2023 • Minghan Qin, Yifan Liu, Yuelang Xu, Xiaochen Zhao, Yebin Liu, Haoqian Wang
One crucial aspect of 3D head avatar reconstruction lies in the details of facial expressions.
no code implementations • 2 Oct 2023 • Xin Huang, Ruizhi Shao, Qi Zhang, Hongwen Zhang, Ying Feng, Yebin Liu, Qing Wang
The main idea is to enhance the model's 2D perception of 3D geometry by learning a normal-adapted diffusion model and a normal-aligned diffusion model.
no code implementations • 29 Sep 2023 • Xiaochen Zhao, Lizhen Wang, Jingxiang Sun, Hongwen Zhang, Jinli Suo, Yebin Liu
The problem of modeling an animatable 3D human head avatar under light-weight setups is of significant importance but has not been well solved.
no code implementations • ICCV 2023 • Siyou Lin, Boyao Zhou, Zerong Zheng, Hongwen Zhang, Yebin Liu
To achieve wrinkle-level as well as texture-level alignment, we present a novel coarse-to-fine two-stage method that leverages intrinsic manifold properties with two neural deformation fields, in the 3D space and the intrinsic space, respectively.
no code implementations • ICCV 2023 • Zhaoqi Su, Liangxiao Hu, Siyou Lin, Hongwen Zhang, Shengping Zhang, Justus Thies, Yebin Liu
In contrast to previous work on 3D avatar reconstruction, our method is able to generalize to novel poses with realistic dynamic cloth deformations.
no code implementations • 3 Jul 2023 • Yuxiang Zhang, Hongwen Zhang, Liangxiao Hu, Jiajun Zhang, Hongwei Yi, Shengping Zhang, Yebin Liu
For more accurate and physically plausible predictions in world space, our network is designed to learn human motions from a human-centric perspective, which enables the understanding of the same motion captured with different camera trajectories.
Ranked #208 on 3D Human Pose Estimation on Human3.6M
no code implementations • 31 May 2023 • Ruizhi Shao, Jingxiang Sun, Cheng Peng, Zerong Zheng, Boyao Zhou, Hongwen Zhang, Yebin Liu
We introduce Control4D, an innovative framework for editing dynamic 4D portraits using text instructions.
no code implementations • 31 May 2023 • Junxing Hu, Hongwen Zhang, Zerui Chen, Mengcheng Li, Yunlong Wang, Yebin Liu, Zhenan Sun
In the second part, we introduce a novel method to diffuse estimated contact states from the hand mesh surface to nearby 3D space and leverage diffused contact probabilities to construct the implicit neural representation for the manipulated object.
no code implementations • 8 May 2023 • Zerong Zheng, Xiaochen Zhao, Hongwen Zhang, Boning Liu, Yebin Liu
We present AvatarReX, a new method for learning NeRF-based full-body avatars from video data.
no code implementations • 2 May 2023 • Yuelang Xu, Hongwen Zhang, Lizhen Wang, Xiaochen Zhao, Han Huang, GuoJun Qi, Yebin Liu
Existing approaches to animatable NeRF-based head avatars are either built upon face templates or use the expression coefficients of templates as the driving signal.
1 code implementation • 1 May 2023 • Lizhen Wang, Xiaochen Zhao, Jingxiang Sun, Yuxiang Zhang, Hongwen Zhang, Tao Yu, Yebin Liu
Results and experiments demonstrate the superiority of our method in terms of image quality, full portrait video generation, and real-time re-animation compared to existing facial reenactment methods.
1 code implementation • 25 Apr 2023 • Zhe Li, Zerong Zheng, Yuxiao Liu, Boyao Zhou, Yebin Liu
To this end, we present PoseVocab, a novel pose encoding method that encourages the network to discover the optimal pose embeddings for learning the dynamic human appearance.
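The vocabulary idea above can be sketched as a per-joint lookup: each joint stores a small set of key rotations with associated feature vectors, and a query pose retrieves a distance-weighted blend. The sizes and the inverse-distance weighting rule here are illustrative assumptions, not PoseVocab's exact formulation.

```python
import numpy as np

# Toy pose-embedding vocabulary (assumed design, for illustration only).
rng = np.random.default_rng(0)

J, K, D = 2, 4, 6                        # joints, key poses per joint, feature dim
key_rots = rng.normal(size=(J, K, 3))    # key joint rotations (axis-angle)
key_feats = rng.normal(size=(J, K, D))   # learnable embedding per key pose

def encode(pose):
    """pose: (J, 3) axis-angle; returns (J, D) interpolated embeddings."""
    out = np.empty((J, D))
    for j in range(J):
        d = np.linalg.norm(key_rots[j] - pose[j], axis=1)  # distance to keys
        w = 1.0 / (d + 1e-6)                               # inverse-distance weights
        out[j] = (w / w.sum()) @ key_feats[j]              # normalized blend
    return out

emb = encode(rng.normal(size=(J, 3)))
```

A query that exactly matches a key rotation recovers (approximately) that key's feature, so the vocabulary behaves like a soft, pose-indexed lookup table.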
no code implementations • CVPR 2023 • Hongwen Zhang, Siyou Lin, Ruizhi Shao, Yuxiang Zhang, Zerong Zheng, Han Huang, Yandong Guo, Yebin Liu
In this way, the clothing deformations are disentangled such that the pose-dependent wrinkles can be better learned and applied to unseen poses.
no code implementations • ICCV 2023 • Haibiao Xuan, Xiongzheng Li, Jinsong Zhang, Hongwen Zhang, Yebin Liu, Kun Li
Also, we model global and local spatial relationships in a 3D scene and a textual description, respectively, based on the scene graph, and introduce a part-level action mechanism to represent interactions as atomic body-part states.
no code implementations • 15 Jan 2023 • Kai Jia, Hongwen Zhang, Liang An, Yebin Liu
The key components of a typical regressor lie in the feature extraction of input views and the fusion of multi-view features.
no code implementations • CVPR 2023 • Ruizhi Shao, Zerong Zheng, Hanzhang Tu, Boning Liu, Hongwen Zhang, Yebin Liu
The key to our solution is an efficient 4D tensor decomposition method, so that the dynamic scene can be directly represented as a 4D spatio-temporal tensor.
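The compression benefit of such a decomposition can be sketched with a CP-style factorization: a dense 4D (x, y, z, t) field is represented as a sum of rank-1 outer products of per-axis factors. The rank and grid sizes here are tiny illustrative assumptions, not the paper's actual design.

```python
import numpy as np

# Minimal rank-R factorization sketch of a 4D spatio-temporal tensor.
# Shapes and rank are illustrative placeholders.
rng = np.random.default_rng(0)

R = 4                        # assumed rank
X, Y, Z, T = 8, 8, 8, 5      # tiny grid for demonstration
fx = rng.normal(size=(R, X))
fy = rng.normal(size=(R, Y))
fz = rng.normal(size=(R, Z))
ft = rng.normal(size=(R, T))

def reconstruct():
    """Rebuild the full 4D tensor from its per-axis factors."""
    return np.einsum('rx,ry,rz,rt->xyzt', fx, fy, fz, ft)

vol = reconstruct()

# Storage drops from X*Y*Z*T grid values to R*(X+Y+Z+T) factor entries.
dense_size = X * Y * Z * T
factored_size = R * (X + Y + Z + T)
```

Because only the 1D factors are stored and optimized, querying and updating the dynamic field stays cheap even as the grid resolution grows.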
no code implementations • 23 Nov 2022 • Yuelang Xu, Lizhen Wang, Xiaochen Zhao, Hongwen Zhang, Yebin Liu
AvatarMAV is the first to model both the canonical appearance and the decoupled expression motion with neural voxels for head avatars.
1 code implementation • 21 Nov 2022 • Ruizhi Shao, Zerong Zheng, Hanzhang Tu, Boning Liu, Hongwen Zhang, Yebin Liu
The key to our solution is an efficient 4D tensor decomposition method, so that the dynamic scene can be directly represented as a 4D spatio-temporal tensor.
2 code implementations • CVPR 2023 • Jingxiang Sun, Xuan Wang, Lizhen Wang, Xiaoyu Li, Yong Zhang, Hongwen Zhang, Yebin Liu
We propose a novel 3D GAN framework for unsupervised learning of generative, high-quality and 3D-consistent facial avatars from unstructured 2D images.
no code implementations • 16 Jul 2022 • Ruizhi Shao, Zerong Zheng, Hongwen Zhang, Jingxiang Sun, Yebin Liu
At its core is a novel diffusion-based stereo module, which introduces diffusion models, a type of powerful generative models, into the iterative stereo matching network.
1 code implementation • 14 Jul 2022 • Siyou Lin, Hongwen Zhang, Zerong Zheng, Ruizhi Shao, Yebin Liu
We present FITE, a First-Implicit-Then-Explicit framework for modeling human avatars in clothing.
1 code implementation • 13 Jul 2022 • Hongwen Zhang, Yating Tian, Yuxiang Zhang, Mengcheng Li, Liang An, Zhenan Sun, Yebin Liu
To address these issues, we propose a Pyramidal Mesh Alignment Feedback (PyMAF) loop in our regression network for well-aligned human mesh recovery and extend it as PyMAF-X for the recovery of expressive full-body models.
Ranked #6 on 3D Human Pose Estimation on AGORA (using extra training data)
no code implementations • 11 Jul 2022 • Chaonan Ji, Tao Yu, Kaiwen Guo, Jingxin Liu, Yebin Liu
For the relighting, we introduce a ray tracing-based per-pixel lighting representation that explicitly models high-frequency shadows and propose a learning-based shading refinement module to restore realistic shadows (including hard cast shadows) from the ray-traced shading maps.
1 code implementation • 5 Jul 2022 • Zhe Li, Zerong Zheng, Hongwen Zhang, Chaonan Ji, Yebin Liu
Then given a monocular RGB video of this subject, our method integrates information from both the image observation and the avatar prior, and accordingly reconstructs high-fidelity 3D textured models with dynamic details regardless of the visibility.
no code implementations • 20 Jun 2022 • Gaochang Wu, Yuemei Zhou, Yebin Liu, Lu Fang, Tianyou Chai
In this paper, we present a Geometry-aware Neural Interpolation (Geo-NI) framework for light field rendering.
no code implementations • 5 Jun 2022 • Qiao Feng, Yebin Liu, Yu-Kun Lai, Jingyu Yang, Kun Li
Based on FOF, we design the first 30+ FPS high-fidelity real-time monocular human reconstruction framework.
1 code implementation • 31 May 2022 • Jingxiang Sun, Xuan Wang, Yichun Shi, Lizhen Wang, Jue Wang, Yebin Liu
Existing 3D-aware facial generation methods face a dilemma in quality versus editability: they either generate editable results in low resolution or high-quality ones with no editing flexibility.
1 code implementation • 20 Apr 2022 • Yang Zheng, Yanchao Yang, Kaichun Mo, Jiaman Li, Tao Yu, Yebin Liu, C. Karen Liu, Leonidas J. Guibas
We perform an extensive study of the benefits of leveraging the eye gaze for ego-centric human motion prediction with various state-of-the-art architectures.
no code implementations • 7 Apr 2022 • Yuemei Zhou, Tao Yu, Zerong Zheng, Ying Fu, Yebin Liu
Existing state-of-the-art novel view synthesis methods rely on either fairly accurate 3D geometry estimation or sampling of the entire space for neural volumetric rendering, which limit the overall efficiency.
no code implementations • CVPR 2022 • Zerong Zheng, Han Huang, Tao Yu, Hongwen Zhang, Yandong Guo, Yebin Liu
These local radiance fields not only leverage the flexibility of implicit representation in shape and appearance modeling, but also factorize cloth deformations into skeleton motions, node residual translations and the dynamic detail variations inside each individual radiance field.
1 code implementation • CVPR 2022 • Lizhen Wang, ZhiYuan Chen, Tao Yu, Chenguang Ma, Liang Li, Yebin Liu
In the coarse module, we generate a base parametric model from large-scale RGB-D images, which is able to predict accurate rough 3D face models across different genders, ages, etc.
1 code implementation • CVPR 2022 • Mengcheng Li, Liang An, Hongwen Zhang, Lianpeng Wu, Feng Chen, Tao Yu, Yebin Liu
To solve occlusion and interaction challenges of two-hand reconstruction, we introduce two novel attention based modules in each upsampling step of the original GCN.
Ranked #4 on 3D Interacting Hand Pose Estimation on InterHand2.6M
1 code implementation • 3 Mar 2022 • Yating Tian, Hongwen Zhang, Yebin Liu, LiMin Wang
Since the release of statistical body models, 3D human mesh recovery has been drawing broader attention.
no code implementations • CVPR 2022 • Hao Zhao, Jinsong Zhang, Yu-Kun Lai, Zerong Zheng, Yingdi Xie, Yebin Liu, Kun Li
To cope with the complexity of textures and generate photo-realistic results, we propose a reference-based neural rendering network and exploit a bottom-up sharpening-guided fine-tuning strategy to obtain detailed textures.
no code implementations • 19 Dec 2021 • Tao Hu, Tao Yu, Zerong Zheng, He Zhang, Yebin Liu, Matthias Zwicker
To handle complicated motions (e.g., self-occlusions), we then leverage the encoded information on the UV manifold to construct a 3D volumetric representation based on a dynamic pose-conditioned neural radiance field.
1 code implementation • CVPR 2022 • Jingxiang Sun, Xuan Wang, Yong Zhang, Xiaoyu Li, Qi Zhang, Yebin Liu, Jue Wang
2D GANs can generate high-fidelity portraits, but with low view consistency.
no code implementations • ICCV 2021 • Yuxiang Zhang, Zhe Li, Liang An, Mengcheng Li, Tao Yu, Yebin Liu
Overall, we propose the first light-weight total capture system, which achieves fast, robust and accurate multi-person total motion capture performance.
Ranked #2 on 3D Multi-Person Pose Estimation on Shelf
no code implementations • ICCV 2021 • Ruizhi Shao, Gaochang Wu, Yuemei Zhou, Ying Fu, Yebin Liu
By combining the local transformer with the multiscale structure, the network is able to capture long-short range correspondences efficiently and accurately.
no code implementations • CVPR 2022 • Ruizhi Shao, Hongwen Zhang, He Zhang, Mingjia Chen, YanPei Cao, Tao Yu, Yebin Liu
We introduce DoubleField, a novel framework combining the merits of both surface field and radiance field for high-fidelity human reconstruction and rendering.
no code implementations • CVPR 2021 • Tao Yu, Zerong Zheng, Kaiwen Guo, Pengpeng Liu, Qionghai Dai, Yebin Liu
Human volumetric capture is a long-standing topic in computer vision and computer graphics.
no code implementations • ICCV 2021 • Yang Zheng, Ruizhi Shao, Yuxiang Zhang, Tao Yu, Zerong Zheng, Qionghai Dai, Yebin Liu
We propose DeepMultiCap, a novel method for multi-person performance capture using sparse multi-view cameras.
1 code implementation • 14 Apr 2021 • Gaochang Wu, Yebin Liu, Lu Fang, Tianyou Chai
In this paper, we revisit the classic LF rendering framework to address both challenges by incorporating it with advanced deep learning techniques.
2 code implementations • ICCV 2021 • Hongwen Zhang, Yating Tian, Xinchi Zhou, Wanli Ouyang, Yebin Liu, LiMin Wang, Zhenan Sun
Regression-based methods have recently shown promising results in reconstructing human meshes from monocular images.
Ranked #5 on 3D Human Pose Estimation on AGORA (using extra training data)
no code implementations • CVPR 2021 • Zhe Li, Tao Yu, Zerong Zheng, Kaiwen Guo, Yebin Liu
By contributing a novel reconstruction framework which contains pose-guided keyframe selection and robust implicit surface fusion, our method fully utilizes the advantages of both tracking-based methods and tracking-free inference methods, and finally enables the high-fidelity reconstruction of dynamic surface details even in the invisible regions.
1 code implementation • 24 Mar 2021 • Gaochang Wu, Yebin Liu, Lu Fang, Qionghai Dai, Tianyou Chai
The main problem in direct reconstruction on the EPI involves an information asymmetry between the spatial and angular dimensions, where the detailed portion in the angular dimensions is damaged by undersampling.
1 code implementation • ICCV 2021 • ZHIYANG YU, Yu Zhang, Deyuan Liu, Dongqing Zou, Xijun Chen, Yebin Liu, Jimmy S. Ren
Though trained on low frame-rate videos, our framework outperforms existing models trained with full high frame-rate videos (and events) on both the GoPro dataset and a new real event-based dataset.
1 code implementation • 13 Dec 2020 • Kun Li, Jinsong Zhang, Yebin Liu, Yu-Kun Lai, Qionghai Dai
In each block, we propose a pose-guided non-local attention (PoNA) mechanism with a long-range dependency scheme to select more important regions of image features to transfer.
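The long-range dependency scheme above can be sketched as a non-local (dot-product attention) block in which the pose map guides which image regions are selected. The projection matrices here are random placeholders for learned weights, and the query/key assignment is an illustrative assumption rather than PoNA's exact design.

```python
import numpy as np

# Sketch of pose-guided non-local attention over flattened image features.
rng = np.random.default_rng(0)

N, C = 16, 32                      # N spatial positions, C channels
feat = rng.normal(size=(N, C))     # image features
pose = rng.normal(size=(N, C))     # pose-map features guiding the attention

Wq, Wk, Wv = (rng.normal(size=(C, C)) * 0.1 for _ in range(3))  # placeholder weights

def pona(feat, pose):
    """Queries come from the pose guide; keys/values from image features."""
    q, k, v = pose @ Wq, feat @ Wk, feat @ Wv
    logits = q @ k.T / np.sqrt(C)                      # (N, N) affinities
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)            # row-wise softmax
    return feat + attn @ v                             # residual connection

out = pona(feat, pose)
```

Because every position attends to every other position, the block can transfer features across distant regions of the image, which is the point of the long-range dependency scheme.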
no code implementations • CVPR 2021 • Yuemei Zhou, Gaochang Wu, Ying Fu, Kun Li, Yebin Liu
Various combinations of cameras enrich computational photography, among which reference-based superresolution (RefSR) plays a critical role in multiscale imaging systems.
no code implementations • 30 Nov 2020 • Zhaoqi Su, Tao Yu, Yangang Wang, Yebin Liu
In this work, we introduce, DeepCloth, a unified framework for garment representation, reconstruction, animation and editing.
no code implementations • 30 Nov 2020 • Xiaochen Zhao, Zerong Zheng, Chaonan Ji, Zhenyi Liu, Siyou Lin, Tao Yu, Jinli Suo, Yebin Liu
We introduce VERTEX, an effective solution to recover 3D shape and intrinsic texture of vehicles from uncalibrated monocular input in real-world street environments.
1 code implementation • CVPR 2021 • Zerong Zheng, Tao Yu, Qionghai Dai, Yebin Liu
Deep implicit functions (DIFs), as a kind of 3D shape representation, are becoming increasingly popular in the 3D vision community due to their compactness and strong representation power.
1 code implementation • ECCV 2020 • Lizhen Wang, Xiaochen Zhao, Tao Yu, Songtao Wang, Yebin Liu
We propose NormalGAN, a fast adversarial learning-based method to reconstruct the complete and detailed 3D human from a single RGB-D image.
1 code implementation • 8 Jul 2020 • Zerong Zheng, Tao Yu, Yebin Liu, Qionghai Dai
To overcome the limitations of regular 3D representations, we propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit function.
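The combination of a parametric body model with a free-form implicit function can be sketched schematically: each query point receives a pixel-aligned image feature plus a coarse occupancy cue from a body proxy, and a small network maps them to inside/outside probability. The tiny random MLP and the unit-sphere "body prior" below are stand-in assumptions, not PaMIR's actual components.

```python
import numpy as np

# Schematic model-conditioned implicit function (illustrative only).
rng = np.random.default_rng(0)

C = 8                                   # assumed image-feature channels
W1 = rng.normal(size=(C + 1, 16)) * 0.3  # placeholder MLP weights
W2 = rng.normal(size=(16, 1)) * 0.3

def body_prior(pts):
    """Placeholder parametric-model cue: occupancy of a unit sphere."""
    return (np.linalg.norm(pts, axis=1, keepdims=True) < 1.0).astype(float)

def implicit_fn(pts, img_feat):
    """pts: (N, 3) query points; img_feat: (N, C) pixel-aligned features."""
    x = np.concatenate([img_feat, body_prior(pts)], axis=1)  # fuse both cues
    h = np.maximum(x @ W1, 0.0)                              # ReLU
    return 1.0 / (1.0 + np.exp(-(h @ W2)))                   # occupancy in (0, 1)

pts = rng.normal(size=(5, 3))
occ = implicit_fn(pts, rng.normal(size=(5, C)))
```

Conditioning the implicit function on the body proxy regularizes reconstruction in unobserved regions, while the free-form part recovers details the parametric model cannot represent.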
Ranked #2 on 3D Human Reconstruction on CAPE
1 code implementation • 5 Jul 2020 • Gaochang Wu, Yingqian Wang, Yebin Liu, Lu Fang, Tianyou Chai
In this paper, we propose a spatial-angular attention network to perceive correspondences in the light field non-locally and reconstruct high-angular-resolution light fields in an end-to-end manner.
no code implementations • 13 Apr 2020 • Zhaoqi Su, Weilin Wan, Tao Yu, Lingjie Liu, Lu Fang, Wenping Wang, Yebin Liu
We introduce MulayCap, a novel human performance capture method using a monocular video camera without the need for pre-scanning.
no code implementations • CVPR 2020 • Zhe Jiang, Yu Zhang, Dongqing Zou, Jimmy Ren, Jiancheng Lv, Yebin Liu
Recovering sharp video sequence from a motion-blurred image is highly ill-posed due to the significant loss of motion information in the blurring process.
Ranked #27 on Image Deblurring on GoPro (using extra training data)
no code implementations • CVPR 2020 • Zhe Li, Tao Yu, Chuanyu Pan, Zerong Zheng, Yebin Liu
In this paper, we propose an efficient method for robust 3D self-portraits using a single RGBD camera.
1 code implementation • CVPR 2020 • Yuxiang Zhang, Liang An, Tao Yu, Xiu Li, Kun Li, Yebin Liu
Our method enables a realtime online motion capture system running at 30fps using 5 cameras on a 5-person scene.
Ranked #8 on 3D Multi-Person Pose Estimation on Shelf
no code implementations • 19 Jun 2019 • Jingyu Yang, Ji Xu, Kun Li, Yu-Kun Lai, Huanjing Yue, Jianzhi Lu, Hao Wu, Yebin Liu
This paper proposes a new method for simultaneous 3D reconstruction and semantic segmentation of indoor scenes.
no code implementations • CVPR 2019 • Tao Yu, Zerong Zheng, Yuan Zhong, Jianhui Zhao, Qionghai Dai, Gerard Pons-Moll, Yebin Liu
This paper proposes a new method for live free-viewpoint human performance capture with dynamic details (e.g., cloth wrinkles) using a single RGBD camera.
1 code implementation • ICCV 2019 • Zerong Zheng, Tao Yu, Yixuan Wei, Qionghai Dai, Yebin Liu
We propose DeepHuman, an image-guided volume-to-volume translation CNN for 3D human reconstruction from a single RGB image.
no code implementations • 17 Feb 2019 • Gaochang Wu, Yebin Liu, Lu Fang, Tianyou Chai
We then propose a novel network architecture for the LapEPI structure, termed as LapEPI-net.
no code implementations • 5 Dec 2018 • Xiu Li, Yebin Liu, Hanbyul Joo, Qionghai Dai, Yaser Sheikh
Specifically, we first introduce a novel markerless motion capture method that can take advantage of dense parsing capability provided by the dense pose detector.
3 code implementations • ECCV 2018 • Shi Yan, Chenglei Wu, Lizhen Wang, Feng Xu, Liang An, Kaiwen Guo, Yebin Liu
Consumer depth sensors are becoming more and more popular and are entering our daily lives, as marked by their recent integration in the latest iPhone X.
no code implementations • ECCV 2018 • Zerong Zheng, Tao Yu, Hao Li, Kaiwen Guo, Qionghai Dai, Lu Fang, Yebin Liu
We propose a light-weight and highly robust real-time human performance capture method based on a single depth camera and sparse inertial measurement units (IMUs).
1 code implementation • ECCV 2018 • Haitian Zheng, Mengqi Ji, Haoqian Wang, Yebin Liu, Lu Fang
Reference-based Super-resolution (RefSR) super-resolves a low-resolution (LR) image given an external high-resolution (HR) reference image, where the reference and LR images share a similar viewpoint but have a significant (8x) resolution gap.
no code implementations • CVPR 2018 • Xiu Li, Hongdong Li, Hanbyul Joo, Yebin Liu, Yaser Sheikh
This paper proposes a new method for Non-Rigid Structure-from-Motion (NRSfM) from a long monocular video sequence observing a non-rigid object performing recurrent and possibly repetitive dynamic action.
no code implementations • CVPR 2018 • Tao Yu, Zerong Zheng, Kaiwen Guo, Jianhui Zhao, Qionghai Dai, Hao Li, Gerard Pons-Moll, Yebin Liu
We further propose a joint motion tracking method based on the double layer representation to enable robust and fast motion tracking performance.
no code implementations • ICCV 2017 • Tao Yu, Kaiwen Guo, Feng Xu, Yuan Dong, Zhaoqi Su, Jianhui Zhao, Jianguo Li, Qionghai Dai, Yebin Liu
To reduce the ambiguities of the non-rigid deformation parameterization on the surface graph nodes, we take advantage of the internal articulated motion prior for human performance and contribute a skeleton-embedded surface fusion (SSF) method.
3 code implementations • ICCV 2017 • Mengqi Ji, Juergen Gall, Haitian Zheng, Yebin Liu, Lu Fang
It takes a set of images and their corresponding camera parameters as input and directly infers the 3D model.
no code implementations • CVPR 2017 • Gaochang Wu, Mandan Zhao, Liangyong Wang, Qionghai Dai, Tianyou Chai, Yebin Liu
In this paper, we take advantage of the clear texture structure of the epipolar plane image (EPI) in the light field data and model the problem of light field reconstruction from a sparse set of views as a CNN-based angular detail restoration on EPI.
no code implementations • CVPR 2017 • Hang Yan, Yebin Liu, Yasutaka Furukawa
Our approach first warps an input video into the viewpoint of a reference camera.
no code implementations • 29 Oct 2016 • Lan Xu, Lu Fang, Wei Cheng, Kaiwen Guo, Guyue Zhou, Qionghai Dai, Yebin Liu
We propose a novel non-rigid surface registration method to track and fuse the depth of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera.
no code implementations • ICCV 2015 • Kaiwen Guo, Feng Xu, Yangang Wang, Yebin Liu, Qionghai Dai
We present a new motion tracking method to robustly reconstruct non-rigid geometries and motions from single view depth inputs captured by a consumer depth sensor.
no code implementations • 22 Nov 2015 • Haitian Zheng, Yebin Liu, Mengqi Ji, Feng Wu, Lu Fang
Finally, the optimization problem enables us to take advantage of state-of-the-art fully convolutional network structure for the implementation of the above encoders and decoder.
no code implementations • CVPR 2015 • Zhoutong Zhang, Yebin Liu, Qionghai Dai
We first introduce a disparity assisted phase based synthesis (DAPS) strategy that can integrate disparity information into the phase term of a reference image to warp it to its close neighbor views.
no code implementations • CVPR 2014 • Jingyu Lin, Yebin Liu, Matthias B. Hullin, Qionghai Dai
A transient image is the optical impulse response of a scene which visualizes light propagation during an ultra-short time interval.