Search Results for author: Yebin Liu

Found 92 papers, 34 papers with code

RobustFusion: Human Volumetric Capture with Data-driven Visual Cues using a RGBD Camera

no code implementations ECCV 2020 Zhuo Su, Lan Xu, Zerong Zheng, Tao Yu, Yebin Liu, Lu Fang

To enable robust tracking, we embrace both the initial model and the various visual cues in a novel performance capture scheme with hybrid motion optimization and semantic volumetric fusion, which can successfully capture challenging human motions in the monocular setting without a pre-scanned detailed template, and which has the reinitialization ability to recover from tracking failures and disappear-reoccur scenarios.

4D reconstruction

TexVocab: Texture Vocabulary-conditioned Human Avatars

no code implementations 31 Mar 2024 Yuxiao Liu, Zhe Li, Yebin Liu, Haoqian Wang

To adequately utilize the available image evidence in multi-view video-based avatar modeling, we propose TexVocab, a novel avatar representation that constructs a texture vocabulary and associates body poses with texture maps for animation.

Human Dynamics

Lodge: A Coarse to Fine Diffusion Network for Long Dance Generation Guided by the Characteristic Dance Primitives

1 code implementation 15 Mar 2024 Ronghui Li, Yuxiang Zhang, Yachao Zhang, Hongwen Zhang, Jie Guo, Yan Zhang, Yebin Liu, Xiu Li

In contrast, the second stage is the local diffusion, which generates detailed motion sequences in parallel under the guidance of the dance primitives and choreographic rules.

Motion Synthesis

TACO: Benchmarking Generalizable Bimanual Tool-ACtion-Object Understanding

no code implementations 16 Jan 2024 Yun Liu, Haolin Yang, Xu Si, Ling Liu, Zipeng Li, Yuxiang Zhang, Yebin Liu, Li Yi

Humans commonly work with multiple objects in daily life and can intuitively transfer manipulation skills to novel objects by understanding object functional regularities.

Action Recognition Benchmarking +2

Ins-HOI: Instance Aware Human-Object Interactions Recovery

1 code implementation 15 Dec 2023 Jiajun Zhang, Yuxiang Zhang, Hongwen Zhang, Xiao Zhou, Boyao Zhou, Ruizhi Shao, Zonghai Hu, Yebin Liu

To address this, we further propose a complementary training strategy that leverages synthetic data to introduce instance-level shape priors, enabling the disentanglement of occupancy fields for different instances.

Descriptive Disentanglement +3

GMTalker: Gaussian Mixture based Emotional talking video Portraits

no code implementations 12 Dec 2023 Yibo Xia, Lizhen Wang, Xiang Deng, Xiaoyan Luo, Yebin Liu

Specifically, we propose a Gaussian Mixture based Expression Generator (GMEG) which can construct a continuous and multi-modal latent space, achieving more flexible emotion manipulation.

Layered 3D Human Generation via Semantic-Aware Diffusion Model

no code implementations 10 Dec 2023 Yi Wang, Jian Ma, Ruizhi Shao, Qiao Feng, Yu-Kun Lai, Yebin Liu, Kun Li

To keep the generated clothing consistent with the target text, we propose a semantic-confidence strategy for clothing that can eliminate the non-clothing content generated by the model.

MonoGaussianAvatar: Monocular Gaussian Point-based Head Avatar

no code implementations 7 Dec 2023 Yufan Chen, Lizhen Wang, Qijing Li, Hongjiang Xiao, Shengping Zhang, Hongxun Yao, Yebin Liu

In response to these challenges, we propose MonoGaussianAvatar (Monocular Gaussian Point-based Head Avatar), a novel approach that harnesses 3D Gaussian point representation coupled with a Gaussian deformation field to learn explicit head avatars from monocular portrait videos.

Gaussian Head Avatar: Ultra High-fidelity Head Avatar via Dynamic Gaussians

1 code implementation 5 Dec 2023 Yuelang Xu, Benwang Chen, Zhe Li, Hongwen Zhang, Lizhen Wang, Zerong Zheng, Yebin Liu

Creating high-fidelity 3D head avatars has always been a research hotspot, but there remains a great challenge under lightweight sparse view setups.

2k

InvertAvatar: Incremental GAN Inversion for Generalized Head Avatars

no code implementations 3 Dec 2023 Xiaochen Zhao, Jingxiang Sun, Lizhen Wang, Yebin Liu

While high fidelity and efficiency are central to the creation of digital head avatars, recent methods relying on 2D or 3D generative models often experience limitations such as shape distortion, expression inaccuracy, and identity flickering.

Image-to-Image Translation

SpeechAct: Towards Generating Whole-body Motion from Speech

no code implementations 29 Nov 2023 Jinsong Zhang, Minjie Zhu, Yuxiang Zhang, Yebin Liu, Kun Li

Then, we regress the motion representation from the audio signal by a translation model employing our contrastive motion learning method.

Animatable Gaussians: Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling

1 code implementation 27 Nov 2023 Zhe Li, Zerong Zheng, Lizhen Wang, Yebin Liu

Overall, our method can create lifelike avatars with dynamic, realistic and generalized appearances.

Human as Points: Explicit Point-based 3D Human Reconstruction from Single-view RGB Images

1 code implementation 6 Nov 2023 Yingzhi Tang, Qijian Zhang, Junhui Hou, Yebin Liu

The latest trends in the research field of single-view human reconstruction devote to learning deep implicit functions constrained by explicit body shape priors.

3D Human Reconstruction

DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior

1 code implementation 25 Oct 2023 Jingxiang Sun, Bo Zhang, Ruizhi Shao, Lizhen Wang, Wen Liu, Zhenda Xie, Yebin Liu

The score distillation from this 3D-aware diffusion prior provides view-consistent guidance for the scene.

3D Generation

HumanNorm: Learning Normal Diffusion Model for High-quality and Realistic 3D Human Generation

no code implementations 2 Oct 2023 Xin Huang, Ruizhi Shao, Qi Zhang, Hongwen Zhang, Ying Feng, Yebin Liu, Qing Wang

The main idea is to enhance the model's 2D perception of 3D geometry by learning a normal-adapted diffusion model and a normal-aligned diffusion model.

Text to 3D Texture Synthesis

HAvatar: High-fidelity Head Avatar via Facial Model Conditioned Neural Radiance Field

no code implementations 29 Sep 2023 Xiaochen Zhao, Lizhen Wang, Jingxiang Sun, Hongwen Zhang, Jinli Suo, Yebin Liu

The problem of modeling an animatable 3D human head avatar under light-weight setups is of significant importance but has not been well solved.

Image-to-Image Translation

Leveraging Intrinsic Properties for Non-Rigid Garment Alignment

no code implementations ICCV 2023 Siyou Lin, Boyao Zhou, Zerong Zheng, Hongwen Zhang, Yebin Liu

To achieve wrinkle-level as well as texture-level alignment, we present a novel coarse-to-fine two-stage method that leverages intrinsic manifold properties with two neural deformation fields, in the 3D space and the intrinsic space, respectively.

CaPhy: Capturing Physical Properties for Animatable Human Avatars

no code implementations ICCV 2023 Zhaoqi Su, Liangxiao Hu, Siyou Lin, Hongwen Zhang, Shengping Zhang, Justus Thies, Yebin Liu

In contrast to previous work on 3D avatar reconstruction, our method is able to generalize to novel poses with realistic dynamic cloth deformations.

ProxyCap: Real-time Monocular Full-body Capture in World Space via Human-Centric Proxy-to-Motion Learning

no code implementations 3 Jul 2023 Yuxiang Zhang, Hongwen Zhang, Liangxiao Hu, Jiajun Zhang, Hongwei Yi, Shengping Zhang, Yebin Liu

For more accurate and physically plausible predictions in world space, our network is designed to learn human motions from a human-centric perspective, which enables the understanding of the same motion captured with different camera trajectories.

3D Human Pose Estimation

Control4D: Efficient 4D Portrait Editing with Text

no code implementations 31 May 2023 Ruizhi Shao, Jingxiang Sun, Cheng Peng, Zerong Zheng, Boyao Zhou, Hongwen Zhang, Yebin Liu

We introduce Control4D, an innovative framework for editing dynamic 4D portraits using text instructions.

Learning Explicit Contact for Implicit Reconstruction of Hand-held Objects from Monocular Images

no code implementations 31 May 2023 Junxing Hu, Hongwen Zhang, Zerui Chen, Mengcheng Li, Yunlong Wang, Yebin Liu, Zhenan Sun

In the second part, we introduce a novel method to diffuse estimated contact states from the hand mesh surface to nearby 3D space and leverage diffused contact probabilities to construct the implicit neural representation for the manipulated object.

Object

AvatarReX: Real-time Expressive Full-body Avatars

no code implementations 8 May 2023 Zerong Zheng, Xiaochen Zhao, Hongwen Zhang, Boning Liu, Yebin Liu

We present AvatarReX, a new method for learning NeRF-based full-body avatars from video data.

Disentanglement

LatentAvatar: Learning Latent Expression Code for Expressive Neural Head Avatar

no code implementations 2 May 2023 Yuelang Xu, Hongwen Zhang, Lizhen Wang, Xiaochen Zhao, Han Huang, GuoJun Qi, Yebin Liu

Existing approaches to animatable NeRF-based head avatars are either built upon face templates or use the expression coefficients of templates as the driving signal.

StyleAvatar: Real-time Photo-realistic Portrait Avatar from a Single Video

1 code implementation 1 May 2023 Lizhen Wang, Xiaochen Zhao, Jingxiang Sun, Yuxiang Zhang, Hongwen Zhang, Tao Yu, Yebin Liu

Results and experiments demonstrate the superiority of our method in terms of image quality, full portrait video generation, and real-time re-animation compared to existing facial reenactment methods.

Face Reenactment Translation +1

PoseVocab: Learning Joint-structured Pose Embeddings for Human Avatar Modeling

1 code implementation 25 Apr 2023 Zhe Li, Zerong Zheng, Yuxiao Liu, Boyao Zhou, Yebin Liu

To this end, we present PoseVocab, a novel pose encoding method that encourages the network to discover the optimal pose embeddings for learning the dynamic human appearance.

CloSET: Modeling Clothed Humans on Continuous Surface with Explicit Template Decomposition

no code implementations CVPR 2023 Hongwen Zhang, Siyou Lin, Ruizhi Shao, Yuxiang Zhang, Zerong Zheng, Han Huang, Yandong Guo, Yebin Liu

In this way, the clothing deformations are disentangled such that the pose-dependent wrinkles can be better learned and applied to unseen poses.

Narrator: Towards Natural Control of Human-Scene Interaction Generation via Relationship Reasoning

no code implementations ICCV 2023 Haibiao Xuan, Xiongzheng Li, Jinsong Zhang, Hongwen Zhang, Yebin Liu, Kun Li

Also, we model global and local spatial relationships in a 3D scene and a textual description respectively based on the scene graph, and introduce a part-level action mechanism to represent interactions as atomic body part states.

Delving Deep into Pixel Alignment Feature for Accurate Multi-view Human Mesh Recovery

no code implementations 15 Jan 2023 Kai Jia, Hongwen Zhang, Liang An, Yebin Liu

The key components of a typical regressor lie in the feature extraction of input views and the fusion of multi-view features.

Human Mesh Recovery regression

Tensor4D: Efficient Neural 4D Decomposition for High-Fidelity Dynamic Reconstruction and Rendering

1 code implementation CVPR 2023 Ruizhi Shao, Zerong Zheng, Hanzhang Tu, Boning Liu, Hongwen Zhang, Yebin Liu

The key of our solution is an efficient 4D tensor decomposition method so that the dynamic scene can be directly represented as a 4D spatio-temporal tensor.

Dynamic Reconstruction Tensor Decomposition

AvatarMAV: Fast 3D Head Avatar Reconstruction Using Motion-Aware Neural Voxels

no code implementations 23 Nov 2022 Yuelang Xu, Lizhen Wang, Xiaochen Zhao, Hongwen Zhang, Yebin Liu

AvatarMAV is the first to model both the canonical appearance and the decoupled expression motion with neural voxels for head avatars.

Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars

2 code implementations CVPR 2023 Jingxiang Sun, Xuan Wang, Lizhen Wang, Xiaoyu Li, Yong Zhang, Hongwen Zhang, Yebin Liu

We propose a novel 3D GAN framework for unsupervised learning of generative, high-quality and 3D-consistent facial avatars from unstructured 2D images.

Face Model

DiffuStereo: High Quality Human Reconstruction via Diffusion-based Stereo Using Sparse Cameras

no code implementations 16 Jul 2022 Ruizhi Shao, Zerong Zheng, Hongwen Zhang, Jingxiang Sun, Yebin Liu

At its core is a novel diffusion-based stereo module, which introduces diffusion models, a type of powerful generative model, into the iterative stereo matching network.

3D Human Reconstruction 4k +2

Learning Implicit Templates for Point-Based Clothed Human Modeling

1 code implementation 14 Jul 2022 Siyou Lin, Hongwen Zhang, Zerong Zheng, Ruizhi Shao, Yebin Liu

We present FITE, a First-Implicit-Then-Explicit framework for modeling human avatars in clothing.

PyMAF-X: Towards Well-aligned Full-body Model Regression from Monocular Images

1 code implementation 13 Jul 2022 Hongwen Zhang, Yating Tian, Yuxiang Zhang, Mengcheng Li, Liang An, Zhenan Sun, Yebin Liu

To address these issues, we propose a Pyramidal Mesh Alignment Feedback (PyMAF) loop in our regression network for well-aligned human mesh recovery and extend it as PyMAF-X for the recovery of expressive full-body models.

Ranked #6 on 3D Human Pose Estimation on AGORA (using extra training data)

3D human pose and shape estimation Human Mesh Recovery +2

Geometry-aware Single-image Full-body Human Relighting

no code implementations 11 Jul 2022 Chaonan Ji, Tao Yu, Kaiwen Guo, Jingxin Liu, Yebin Liu

For the relighting, we introduce a ray tracing-based per-pixel lighting representation that explicitly models high-frequency shadows and propose a learning-based shading refinement module to restore realistic shadows (including hard cast shadows) from the ray-traced shading maps.

Disentanglement Neural Rendering

AvatarCap: Animatable Avatar Conditioned Monocular Human Volumetric Capture

1 code implementation 5 Jul 2022 Zhe Li, Zerong Zheng, Hongwen Zhang, Chaonan Ji, Yebin Liu

Then, given a monocular RGB video of this subject, our method integrates information from both the image observation and the avatar prior, and accordingly reconstructs high-fidelity 3D textured models with dynamic details regardless of the visibility.

Geo-NI: Geometry-aware Neural Interpolation for Light Field Rendering

no code implementations 20 Jun 2022 Gaochang Wu, Yuemei Zhou, Yebin Liu, Lu Fang, Tianyou Chai

In this paper, we present a Geometry-aware Neural Interpolation (Geo-NI) framework for light field rendering.

Novel View Synthesis

FOF: Learning Fourier Occupancy Field for Monocular Real-time Human Reconstruction

no code implementations 5 Jun 2022 Qiao Feng, Yebin Liu, Yu-Kun Lai, Jingyu Yang, Kun Li

Based on FOF, we design the first 30+ FPS high-fidelity real-time monocular human reconstruction framework.

IDE-3D: Interactive Disentangled Editing for High-Resolution 3D-aware Portrait Synthesis

1 code implementation 31 May 2022 Jingxiang Sun, Xuan Wang, Yichun Shi, Lizhen Wang, Jue Wang, Yebin Liu

Existing 3D-aware facial generation methods face a dilemma in quality versus editability: they either generate editable results in low resolution or high-quality ones with no editing flexibility.

3D-Aware Image Synthesis

GIMO: Gaze-Informed Human Motion Prediction in Context

1 code implementation 20 Apr 2022 Yang Zheng, Yanchao Yang, Kaichun Mo, Jiaman Li, Tao Yu, Yebin Liu, C. Karen Liu, Leonidas J. Guibas

We perform an extensive study of the benefits of leveraging the eye gaze for ego-centric human motion prediction with various state-of-the-art architectures.

Human motion prediction motion prediction

ProbNVS: Fast Novel View Synthesis with Learned Probability-Guided Sampling

no code implementations 7 Apr 2022 Yuemei Zhou, Tao Yu, Zerong Zheng, Ying Fu, Yebin Liu

Existing state-of-the-art novel view synthesis methods rely on either fairly accurate 3D geometry estimation or sampling of the entire space for neural volumetric rendering, which limit the overall efficiency.

Novel View Synthesis

Structured Local Radiance Fields for Human Avatar Modeling

no code implementations CVPR 2022 Zerong Zheng, Han Huang, Tao Yu, Hongwen Zhang, Yandong Guo, Yebin Liu

These local radiance fields not only leverage the flexibility of implicit representation in shape and appearance modeling, but also factorize cloth deformations into skeleton motions, node residual translations and the dynamic detail variations inside each individual radiance field.

FaceVerse: a Fine-grained and Detail-controllable 3D Face Morphable Model from a Hybrid Dataset

1 code implementation CVPR 2022 Lizhen Wang, ZhiYuan Chen, Tao Yu, Chenguang Ma, Liang Li, Yebin Liu

In the coarse module, we generate a base parametric model from large-scale RGB-D images, which is able to predict accurate rough 3D face models in different genders, ages, etc.

2k 3D Face Reconstruction +1

Recovering 3D Human Mesh from Monocular Images: A Survey

1 code implementation 3 Mar 2022 Yating Tian, Hongwen Zhang, Yebin Liu, LiMin Wang

Since the release of statistical body models, 3D human mesh recovery has been drawing broader attention.

3D human pose and shape estimation Human Mesh Recovery

High-Fidelity Human Avatars From a Single RGB Camera

no code implementations CVPR 2022 Hao Zhao, Jinsong Zhang, Yu-Kun Lai, Zerong Zheng, Yingdi Xie, Yebin Liu, Kun Li

To cope with the complexity of textures and generate photo-realistic results, we propose a reference-based neural rendering network and exploit a bottom-up sharpening-guided fine-tuning strategy to obtain detailed textures.

Neural Rendering Vocal Bursts Intensity Prediction

HVTR: Hybrid Volumetric-Textural Rendering for Human Avatars

no code implementations 19 Dec 2021 Tao Hu, Tao Yu, Zerong Zheng, He Zhang, Yebin Liu, Matthias Zwicker

To handle complicated motions (e.g., self-occlusions), we then leverage the encoded information on the UV manifold to construct a 3D volumetric representation based on a dynamic pose-conditioned neural radiance field.

Neural Rendering

Lightweight Multi-person Total Motion Capture Using Sparse Multi-view Cameras

no code implementations ICCV 2021 Yuxiang Zhang, Zhe Li, Liang An, Mengcheng Li, Tao Yu, Yebin Liu

Overall, we propose the first light-weight total capture system, achieving fast, robust and accurate multi-person total motion capture performance.

3D Multi-Person Pose Estimation

LocalTrans: A Multiscale Local Transformer Network for Cross-Resolution Homography Estimation

no code implementations ICCV 2021 Ruizhi Shao, Gaochang Wu, Yuemei Zhou, Ying Fu, Yebin Liu

By combining the local transformer with the multiscale structure, the network is able to capture long-short range correspondences efficiently and accurately.

Homography Estimation

DoubleField: Bridging the Neural Surface and Radiance Fields for High-fidelity Human Reconstruction and Rendering

no code implementations CVPR 2022 Ruizhi Shao, Hongwen Zhang, He Zhang, Mingjia Chen, YanPei Cao, Tao Yu, Yebin Liu

We introduce DoubleField, a novel framework combining the merits of both surface field and radiance field for high-fidelity human reconstruction and rendering.

Transfer Learning

Revisiting Light Field Rendering with Deep Anti-Aliasing Neural Network

1 code implementation 14 Apr 2021 Gaochang Wu, Yebin Liu, Lu Fang, Tianyou Chai

In this paper, we revisit the classic LF rendering framework to address both challenges by incorporating it with advanced deep learning techniques.

Depth Estimation

POSEFusion: Pose-guided Selective Fusion for Single-view Human Volumetric Capture

no code implementations CVPR 2021 Zhe Li, Tao Yu, Zerong Zheng, Kaiwen Guo, Yebin Liu

By contributing a novel reconstruction framework which contains pose-guided keyframe selection and robust implicit surface fusion, our method fully utilizes the advantages of both tracking-based methods and tracking-free inference methods, and finally enables the high-fidelity reconstruction of dynamic surface details even in the invisible regions.

3D Reconstruction

Light Field Reconstruction Using Convolutional Network on EPI and Extended Applications

1 code implementation 24 Mar 2021 Gaochang Wu, Yebin Liu, Lu Fang, Qionghai Dai, Tianyou Chai

The main problem in direct reconstruction on the EPI involves an information asymmetry between the spatial and angular dimensions, where the detailed portion in the angular dimensions is damaged by undersampling.

Training Weakly Supervised Video Frame Interpolation With Events

1 code implementation ICCV 2021 ZHIYANG YU, Yu Zhang, Deyuan Liu, Dongqing Zou, Xijun Chen, Yebin Liu, Jimmy S. Ren

Though trained on low frame-rate videos, our framework outperforms existing models trained with full high frame-rate videos (and events) on both GoPro dataset and a new real event-based dataset.

Video Frame Interpolation

PoNA: Pose-guided Non-local Attention for Human Pose Transfer

1 code implementation 13 Dec 2020 Kun Li, Jinsong Zhang, Yebin Liu, Yu-Kun Lai, Qionghai Dai

In each block, we propose a pose-guided non-local attention (PoNA) mechanism with a long-range dependency scheme to select more important regions of image features to transfer.

Generative Adversarial Network Person Re-Identification +1

Cross-MPI: Cross-scale Stereo for Image Super-Resolution using Multiplane Images

no code implementations CVPR 2021 Yuemei Zhou, Gaochang Wu, Ying Fu, Kun Li, Yebin Liu

Various combinations of cameras enrich computational photography, among which reference-based super-resolution (RefSR) plays a critical role in multiscale imaging systems.

Image Super-Resolution

DeepCloth: Neural Garment Representation for Shape and Style Editing

no code implementations 30 Nov 2020 Zhaoqi Su, Tao Yu, Yangang Wang, Yebin Liu

In this work, we introduce DeepCloth, a unified framework for garment representation, reconstruction, animation and editing.

Garment Reconstruction Position

Vehicle Reconstruction and Texture Estimation Using Deep Implicit Semantic Template Mapping

no code implementations 30 Nov 2020 Xiaochen Zhao, Zerong Zheng, Chaonan Ji, Zhenyi Liu, Siyou Lin, Tao Yu, Jinli Suo, Yebin Liu

We introduce VERTEX, an effective solution to recover 3D shape and intrinsic texture of vehicles from uncalibrated monocular input in real-world street environments.

Deep Implicit Templates for 3D Shape Representation

1 code implementation CVPR 2021 Zerong Zheng, Tao Yu, Qionghai Dai, Yebin Liu

Deep implicit functions (DIFs), as a kind of 3D shape representation, are becoming more and more popular in the 3D vision community due to their compactness and strong representation power.

3D Shape Representation

NormalGAN: Learning Detailed 3D Human from a Single RGB-D Image

1 code implementation ECCV 2020 Lizhen Wang, Xiaochen Zhao, Tao Yu, Songtao Wang, Yebin Liu

We propose NormalGAN, a fast adversarial learning-based method to reconstruct the complete and detailed 3D human from a single RGB-D image.

3D Human Reconstruction Denoising

PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction

1 code implementation 8 Jul 2020 Zerong Zheng, Tao Yu, Yebin Liu, Qionghai Dai

To overcome the limitations of regular 3D representations, we propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit function.

3D Human Reconstruction Camera Calibration

Spatial-Angular Attention Network for Light Field Reconstruction

1 code implementation 5 Jul 2020 Gaochang Wu, Yingqian Wang, Yebin Liu, Lu Fang, Tianyou Chai

In this paper, we propose a spatial-angular attention network to perceive correspondences in the light field non-locally, and reconstruct high angular resolution light fields in an end-to-end manner.

MulayCap: Multi-layer Human Performance Capture Using A Monocular Video Camera

no code implementations 13 Apr 2020 Zhaoqi Su, Weilin Wan, Tao Yu, Lingjie Liu, Lu Fang, Wenping Wang, Yebin Liu

We introduce MulayCap, a novel human performance capture method using a monocular video camera without the need for pre-scanning.

Learning Event-Based Motion Deblurring

no code implementations CVPR 2020 Zhe Jiang, Yu Zhang, Dongqing Zou, Jimmy Ren, Jiancheng Lv, Yebin Liu

Recovering sharp video sequence from a motion-blurred image is highly ill-posed due to the significant loss of motion information in the blurring process.

Ranked #27 on Image Deblurring on GoPro (using extra training data)

Deblurring Image Deblurring

Robust 3D Self-portraits in Seconds

no code implementations CVPR 2020 Zhe Li, Tao Yu, Chuanyu Pan, Zerong Zheng, Yebin Liu

In this paper, we propose an efficient method for robust 3D self-portraits using a single RGBD camera.

SimulCap : Single-View Human Performance Capture with Cloth Simulation

no code implementations CVPR 2019 Tao Yu, Zerong Zheng, Yuan Zhong, Jianhui Zhao, Qionghai Dai, Gerard Pons-Moll, Yebin Liu

This paper proposes a new method for live free-viewpoint human performance capture with dynamic details (e.g., cloth wrinkles) using a single RGBD camera.

DeepHuman: 3D Human Reconstruction from a Single Image

1 code implementation ICCV 2019 Zerong Zheng, Tao Yu, Yixuan Wei, Qionghai Dai, Yebin Liu

We propose DeepHuman, an image-guided volume-to-volume translation CNN for 3D human reconstruction from a single RGB image.

3D Human Reconstruction Pose Estimation +1

Capture Dense: Markerless Motion Capture Meets Dense Pose Estimation

no code implementations 5 Dec 2018 Xiu Li, Yebin Liu, Hanbyul Joo, Qionghai Dai, Yaser Sheikh

Specifically, we first introduce a novel markerless motion capture method that can take advantage of dense parsing capability provided by the dense pose detector.

Human Parsing Markerless Motion Capture +1

DDRNet: Depth Map Denoising and Refinement for Consumer Depth Cameras Using Cascaded CNNs

3 code implementations ECCV 2018 Shi Yan, Chenglei Wu, Lizhen Wang, Feng Xu, Liang An, Kaiwen Guo, Yebin Liu

Consumer depth sensors are more and more popular and have come into our daily lives, marked by their recent integration in the latest iPhone X.

Denoising

HybridFusion: Real-Time Performance Capture Using a Single Depth Sensor and Sparse IMUs

no code implementations ECCV 2018 Zerong Zheng, Tao Yu, Hao Li, Kaiwen Guo, Qionghai Dai, Lu Fang, Yebin Liu

We propose a light-weight and highly robust real-time human performance capture method based on a single depth camera and sparse inertial measurement units (IMUs).

Surface Reconstruction

CrossNet: An End-to-end Reference-based Super Resolution Network using Cross-scale Warping

1 code implementation ECCV 2018 Haitian Zheng, Mengqi Ji, Haoqian Wang, Yebin Liu, Lu Fang

The Reference-based Super-resolution (RefSR) task super-resolves a low-resolution (LR) image given an external high-resolution (HR) reference image, where the reference image and the LR image share a similar viewpoint but differ by a significant (8x) resolution gap.

Patch Matching Reference-based Super-Resolution

Structure from Recurrent Motion: From Rigidity to Recurrency

no code implementations CVPR 2018 Xiu Li, Hongdong Li, Hanbyul Joo, Yebin Liu, Yaser Sheikh

This paper proposes a new method for Non-Rigid Structure-from-Motion (NRSfM) from a long monocular video sequence observing a non-rigid object performing recurrent and possibly repetitive dynamic action.

Clustering

BodyFusion: Real-Time Capture of Human Motion and Surface Geometry Using a Single Depth Camera

no code implementations ICCV 2017 Tao Yu, Kaiwen Guo, Feng Xu, Yuan Dong, Zhaoqi Su, Jianhui Zhao, Jianguo Li, Qionghai Dai, Yebin Liu

To reduce the ambiguities of the non-rigid deformation parameterization on the surface graph nodes, we take advantage of the internal articulated motion prior for human performance and contribute a skeleton-embedded surface fusion (SSF) method.

Surface Reconstruction

SurfaceNet: An End-to-end 3D Neural Network for Multiview Stereopsis

3 code implementations ICCV 2017 Mengqi Ji, Juergen Gall, Haitian Zheng, Yebin Liu, Lu Fang

It takes a set of images and their corresponding camera parameters as input and directly infers the 3D model.

Light Field Reconstruction Using Deep Convolutional Network on EPI

no code implementations CVPR 2017 Gaochang Wu, Mandan Zhao, Liangyong Wang, Qionghai Dai, Tianyou Chai, Yebin Liu

In this paper, we take advantage of the clear texture structure of the epipolar plane image (EPI) in the light field data and model the problem of light field reconstruction from a sparse set of views as a CNN-based angular detail restoration on EPI.

Turning an Urban Scene Video into a Cinemagraph

no code implementations CVPR 2017 Hang Yan, Yebin Liu, Yasutaka Furukawa

Our approach first warps an input video into the viewpoint of a reference camera.

FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras

no code implementations 29 Oct 2016 Lan Xu, Lu Fang, Wei Cheng, Kaiwen Guo, Guyue Zhou, Qionghai Dai, Yebin Liu

We propose a novel non-rigid surface registration method to track and fuse the depth of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera.

Markerless Motion Capture Visual Odometry

Robust Non-Rigid Motion Tracking and Surface Reconstruction Using L0 Regularization

no code implementations ICCV 2015 Kaiwen Guo, Feng Xu, Yangang Wang, Yebin Liu, Qionghai Dai

We present a new motion tracking method to robustly reconstruct non-rigid geometries and motions from single view depth inputs captured by a consumer depth sensor.

Surface Reconstruction

Learning High-level Prior with Convolutional Neural Networks for Semantic Segmentation

no code implementations 22 Nov 2015 Haitian Zheng, Yebin Liu, Mengqi Ji, Feng Wu, Lu Fang

Finally, the optimization problem enables us to take advantage of state-of-the-art fully convolutional network structure for the implementation of the above encoders and decoder.

Image Segmentation Segmentation +2

Light Field From Micro-Baseline Image Pair

no code implementations CVPR 2015 Zhoutong Zhang, Yebin Liu, Qionghai Dai

We first introduce a disparity assisted phase based synthesis (DAPS) strategy that can integrate disparity information into the phase term of a reference image to warp it to its close neighbor views.

Fourier Analysis on Transient Imaging with a Multifrequency Time-of-Flight Camera

no code implementations CVPR 2014 Jingyu Lin, Yebin Liu, Matthias B. Hullin, Qionghai Dai

A transient image is the optical impulse response of a scene which visualizes light propagation during an ultra-short time interval.
