Search Results for author: Jingyi Yu

Found 100 papers, 26 papers with code

3D Fluid Flow Reconstruction Using Compact Light Field PIV

no code implementations ECCV 2020 Zhong Li, Yu Ji, Jingyi Yu, Jinwei Ye

In this paper, we present a PIV solution that uses a compact lenslet-based light field camera to track dense particles floating in the fluid and reconstruct the 3D fluid flow.

Optical Flow Estimation

IKOL: Inverse kinematics optimization layer for 3D human pose and shape estimation via Gauss-Newton differentiation

no code implementations 2 Feb 2023 Juze Zhang, Ye Shi, Lan Xu, Jingyi Yu, Jingya Wang

This paper presents an inverse kinematic optimization layer (IKOL) for 3D human pose and shape estimation that leverages the strength of both optimization- and regression-based methods within an end-to-end framework.

3D human pose and shape estimation regression

NeuralDome: A Neural Modeling Pipeline on Multi-View Human-Object Interactions

no code implementations 15 Dec 2022 Juze Zhang, Haimin Luo, Hongdi Yang, Xinru Xu, Qianyang Wu, Ye Shi, Jingyi Yu, Lan Xu, Jingya Wang

We construct a dense multi-view dome to acquire a complex human object interaction dataset, named HODome, that consists of ~75M frames on 10 subjects interacting with 23 objects.

Human-Object Interaction Detection

Relightable Neural Human Assets from Multi-view Gradient Illuminations

no code implementations 15 Dec 2022 Taotao Zhou, Kai He, Di Wu, Teng Xu, Qixuan Zhang, Kuixiang Shao, Wenzheng Chen, Lan Xu, Jingyi Yu

Human modeling and relighting are two fundamental problems in computer vision and graphics, where high-quality datasets can largely facilitate related research.

Image Relighting Novel View Synthesis

Executing your Commands via Motion Diffusion in Latent Space

1 code implementation 8 Dec 2022 Xin Chen, Biao Jiang, Wen Liu, Zilong Huang, Bin Fu, Tao Chen, Jingyi Yu, Gang Yu

We study a challenging task, conditional human motion generation, which produces plausible human motion sequences according to various conditional inputs, such as action classes or textual descriptors.

Motion Synthesis

Weakly Supervised 3D Multi-person Pose Estimation for Large-scale Scenes based on Monocular Camera and Single LiDAR

no code implementations 30 Nov 2022 Peishan Cong, Yiteng Xu, Yiming Ren, Juze Zhang, Lan Xu, Jingya Wang, Jingyi Yu, Yuexin Ma

Motivated by this, we propose a monocular camera and single LiDAR-based method for 3D multi-person pose estimation in large-scale scenes, which is easy to deploy and insensitive to light.

3D Multi-Person Pose Estimation 3D Pose Estimation +2

LiCamGait: Gait Recognition in the Wild by Using LiDAR and Camera Multi-modal Visual Sensors

no code implementations 22 Nov 2022 Xiao Han, Peishan Cong, Lan Xu, Jingya Wang, Jingyi Yu, Yuexin Ma

LiDAR can capture accurate depth information in large-scale scenarios without the effect of light conditions, and the captured point cloud contains gait-related 3D geometric properties and dynamic motion characteristics.

Gait Recognition in the Wild

Joint Rigid Motion Correction and Sparse-View CT via Self-Calibrating Neural Field

no code implementations 23 Oct 2022 Qing Wu, Xin Li, Hongjiang Wei, Jingyi Yu, Yuyao Zhang

NeRF-based SVCT methods represent the desired CT image as a continuous function of spatial coordinates and train a Multi-Layer Perceptron (MLP) to learn the function by minimizing loss on the SV sinogram.
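The implicit-representation idea described above can be sketched in a few lines: a coordinate MLP maps continuous (x, y) positions to intensities, so the image can be queried at any resolution. The network below is an untrained toy with made-up layer sizes and a Fourier positional encoding; it is an illustrative assumption, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(coords, n_freqs=4):
    # Map coordinates to sin/cos features so an MLP can fit high frequencies.
    freqs = 2.0 ** np.arange(n_freqs) * np.pi
    angles = coords[..., None] * freqs                    # (N, 2, n_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(coords.shape[0], -1)               # (N, 2 * 2 * n_freqs)

def mlp_forward(coords, W1, b1, W2, b2):
    h = np.maximum(positional_encoding(coords) @ W1 + b1, 0.0)  # ReLU hidden
    return h @ W2 + b2                                          # intensity per point

# Toy weights (in a real SVCT method these are trained on the SV sinogram loss).
d_in, d_hidden = 16, 32
W1 = rng.normal(0, 0.1, (d_in, d_hidden)); b1 = np.zeros(d_hidden)
W2 = rng.normal(0, 0.1, (d_hidden, 1));    b2 = np.zeros(1)

# Because the representation is continuous, we can query it on any grid:
xs, ys = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
coords = np.stack([xs.ravel(), ys.ravel()], axis=-1)      # (64, 2)
image = mlp_forward(coords, W1, b1, W2, b2).reshape(8, 8)
```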

Human Performance Modeling and Rendering via Neural Animated Mesh

no code implementations 18 Sep 2022 Fuqiang Zhao, Yuheng Jiang, Kaixin Yao, Jiakai Zhang, Liao Wang, Haizhao Dai, Yuhui Zhong, Yingliang Zhang, Minye Wu, Lan Xu, Jingyi Yu

In this paper, we present a comprehensive neural approach for high-quality reconstruction, compression, and rendering of human performances from dense multi-view videos.

SCULPTOR: Skeleton-Consistent Face Creation Using a Learned Parametric Generator

no code implementations 14 Sep 2022 Zesong Qiu, Yuwei Li, Dongming He, Qixuan Zhang, Longwen Zhang, Yinghao Zhang, Jingya Wang, Lan Xu, Xudong Wang, Yuyao Zhang, Jingyi Yu

Named after the fossils of one of the oldest known human ancestors, our LUCY dataset contains high-quality Computed Tomography (CT) scans of the complete human head before and after orthognathic surgeries, critical for evaluating surgery results.

Computed Tomography (CT)

Self-Supervised Coordinate Projection Network for Sparse-View Computed Tomography

no code implementations 12 Sep 2022 Qing Wu, Ruimin Feng, Hongjiang Wei, Jingyi Yu, Yuyao Zhang

Compared with recent related works that solve similar problems using implicit neural representation networks (INR), our essential contribution is an effective and simple re-projection strategy that pushes tomography image reconstruction quality beyond that of supervised deep-learning CT reconstruction works.

Image Reconstruction

Generative Deformable Radiance Fields for Disentangled Image Synthesis of Topology-Varying Objects

no code implementations 9 Sep 2022 Ziyu Wang, Yu Deng, Jiaolong Yang, Jingyi Yu, Xin Tong

Experiments show that our method can successfully learn the generative model from unstructured monocular images and well disentangle the shape and appearance for objects (e.g., chairs) with large topological variance.

Disentanglement Image Generation

Mutual Adaptive Reasoning for Monocular 3D Multi-Person Pose Estimation

no code implementations 16 Jul 2022 Juze Zhang, Jingya Wang, Ye Shi, Fei Gao, Lan Xu, Jingyi Yu

This method first uses 2.5D pose and geometry information to infer camera-centric root depths in a forward pass, and then exploits the root depths to further improve representation learning of 2.5D pose estimation in a backward pass.

3D Multi-Person Pose Estimation Depth Estimation +2

NARRATE: A Normal Assisted Free-View Portrait Stylizer

no code implementations 3 Jul 2022 Youjia Wang, Teng Xu, Yiwen Wu, Minzhang Li, Wenzheng Chen, Lan Xu, Jingyi Yu

We extend Total Relighting to fix this problem by unifying its multi-view input normal maps with the physical face model.

Face Model Neural Rendering +1

LiDAR-aid Inertial Poser: Large-scale Human Motion Capture by Sparse Inertial and LiDAR Sensors

no code implementations 30 May 2022 Chengfeng Zhao, Yiming Ren, Yannan He, Peishan Cong, Han Liang, Jingyi Yu, Lan Xu, Yuexin Ma

We propose a multi-sensor fusion method for capturing challenging 3D human motions with accurate consecutive local poses and global trajectories in large-scale scenarios, only using a single LiDAR and 4 IMUs.

Translation

PREF: Phasorial Embedding Fields for Compact Neural Representations

1 code implementation 26 May 2022 Binbin Huang, Xinhao Yan, Anpei Chen, Shenghua Gao, Jingyi Yu

We present an efficient frequency-based neural representation termed PREF: a shallow MLP augmented with a phasor volume that covers a significantly broader spectrum than previous Fourier feature mapping or Positional Encoding.

TensoRF: Tensorial Radiance Fields

2 code implementations 17 Mar 2022 Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, Hao Su

We demonstrate that applying traditional CP decomposition -- that factorizes tensors into rank-one components with compact vectors -- in our framework leads to improvements over vanilla NeRF.
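The CP factorization mentioned above is easy to illustrate: a rank-R 3D tensor is exactly the sum of R outer products of compact vectors, so storing the vectors is far cheaper than storing the dense grid. The sizes and rank in this sketch are toy assumptions, not TensoRF's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
I = J = K = 16
R = 3  # CP rank

# Build a ground-truth rank-R tensor from known factor vectors.
a = rng.normal(size=(R, I))
b = rng.normal(size=(R, J))
c = rng.normal(size=(R, K))
T = np.einsum('ri,rj,rk->ijk', a, b, c)

# Reconstruction from the rank-one components is exact for a true rank-R tensor.
T_rec = sum(np.einsum('i,j,k->ijk', a[r], b[r], c[r]) for r in range(R))
err = np.abs(T - T_rec).max()

# Compact vectors vs. the dense grid:
dense_params = I * J * K     # 4096 entries for the full tensor
cp_params = R * (I + J + K)  # 144 entries for the factor vectors
```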

HSC4D: Human-centered 4D Scene Capture in Large-scale Indoor-outdoor Space Using Wearable IMUs and LiDAR

1 code implementation CVPR 2022 Yudi Dai, Yitai Lin, Chenglu Wen, Siqi Shen, Lan Xu, Jingyi Yu, Yuexin Ma, Cheng Wang

We propose Human-centered 4D Scene Capture (HSC4D) to accurately and efficiently create a dynamic digital world, containing large-scale indoor-outdoor scenes, diverse human motions, and rich interactions between humans and environments.

3D Human Pose Estimation Autonomous Driving

NeReF: Neural Refractive Field for Fluid Surface Reconstruction and Implicit Representation

no code implementations 8 Mar 2022 Ziyu Wang, Wei Yang, Junming Cao, Lan Xu, Junqing Yu, Jingyi Yu

We present a novel neural refractive field (NeReF) to recover the wavefront of transparent fluids by simultaneously estimating the surface position and normal of the fluid front.

Surface Reconstruction

Fourier PlenOctrees for Dynamic Radiance Field Rendering in Real-time

no code implementations CVPR 2022 Liao Wang, Jiakai Zhang, Xinhang Liu, Fuqiang Zhao, Yanshun Zhang, Yingliang Zhang, Minye Wu, Lan Xu, Jingyi Yu

In this paper, we present a novel Fourier PlenOctree (FPO) technique to tackle efficient neural modeling and real-time rendering of dynamic scenes captured under the free-view video (FVV) setting.

NeuVV: Neural Volumetric Videos with Immersive Rendering and Editing

no code implementations 12 Feb 2022 Jiakai Zhang, Liao Wang, Xinhang Liu, Fuqiang Zhao, Minzhang Li, Haizhao Dai, Boyuan Zhang, Wei Yang, Lan Xu, Jingyi Yu

We further develop a hybrid neural-rasterization rendering framework to support consumer-level VR headsets so that the aforementioned volumetric video viewing and editing, for the first time, can be conducted immersively in virtual 3D space.

3D Reconstruction

Artemis: Articulated Neural Pets with Appearance and Motion synthesis

1 code implementation 11 Feb 2022 Haimin Luo, Teng Xu, Yuheng Jiang, Chenglin Zhou, Qiwei Qiu, Yingliang Zhang, Wei Yang, Lan Xu, Jingyi Yu

Our ARTEMIS enables interactive motion control, real-time animation, and photo-realistic rendering of furry animals.

Motion Synthesis

Video-driven Neural Physically-based Facial Asset for Production

no code implementations 11 Feb 2022 Longwen Zhang, Chuxiao Zeng, Qixuan Zhang, Hongyang Lin, Ruixiang Cao, Wei Yang, Lan Xu, Jingyi Yu

In this paper, we present a new learning-based, video-driven approach for generating dynamic facial geometries with high-quality physically-based assets.

motion retargeting Texture Synthesis

NIMBLE: A Non-rigid Hand Model with Bones and Muscles

no code implementations 9 Feb 2022 Yuwei Li, Longwen Zhang, Zesong Qiu, Yingwenqi Jiang, Nianyi Li, Yuexin Ma, Yuyao Zhang, Lan Xu, Jingyi Yu

Emerging Metaverse applications demand reliable, accurate, and photorealistic reproductions of human hands to perform sophisticated operations as if in the physical world.

Onsite Non-Line-of-Sight Imaging via Online Calibrations

no code implementations 29 Dec 2021 Zhengqing Pan, Ruiqian Li, Tian Gao, Zi Wang, Ping Liu, Siyuan Shen, Tao Wu, Jingyi Yu, Shiying Li

There has been an increasing interest in deploying non-line-of-sight (NLOS) imaging systems for recovering objects behind an obstacle.

HumanNeRF: Efficiently Generated Human Radiance Field from Sparse Inputs

no code implementations CVPR 2022 Fuqiang Zhao, Wei Yang, Jiakai Zhang, Pei Lin, Yingliang Zhang, Jingyi Yu, Lan Xu

The raw HumanNeRF can already produce reasonable rendering on sparse video inputs of unseen subjects and camera settings.

An Arbitrary Scale Super-Resolution Approach for 3D MR Images via Implicit Neural Representation

1 code implementation 27 Oct 2021 Qing Wu, Yuwei Li, Yawen Sun, Yan Zhou, Hongjiang Wei, Jingyi Yu, Yuyao Zhang

In the ArSSR model, the reconstruction of HR images with different up-scaling rates is defined as learning a continuous implicit voxel function from the observed LR images.

Image Reconstruction Image Super-Resolution +1

Light Field-Based Underwater 3D Reconstruction Via Angular Resampling

no code implementations 5 Sep 2021 Yuqi Ding, Zhang Chen, Yu Ji, Jingyi Yu, Jinwei Ye

Recovering 3D geometry of underwater scenes is challenging because of non-linear refraction of light at the water-air interface caused by the camera housing.

3D Reconstruction Depth Estimation

iButter: Neural Interactive Bullet Time Generator for Human Free-viewpoint Rendering

no code implementations 12 Aug 2021 Liao Wang, Ziyu Wang, Pei Lin, Yuheng Jiang, Xin Suo, Minye Wu, Lan Xu, Jingyi Yu

To fill this gap, in this paper we propose a neural interactive bullet-time generator (iButter) for photo-realistic human free-viewpoint rendering from dense RGB streams, which enables flexible and interactive design for human bullet-time visual effects.

Video Generation

Neural Free-Viewpoint Performance Rendering under Complex Human-object Interactions

no code implementations 1 Aug 2021 Guoxing Sun, Xin Chen, Yizhang Chen, Anqi Pang, Pei Lin, Yuheng Jiang, Lan Xu, Jingya Wang, Jingyi Yu

In this paper, we propose a neural human performance capture and rendering system to generate both high-quality geometry and photo-realistic textures of both humans and objects under challenging interaction scenarios in arbitrary novel views, from only sparse RGB streams.

Dynamic Reconstruction Human-Object Interaction Detection +3

Neural Relighting and Expression Transfer On Video Portraits

no code implementations 30 Jul 2021 Youjia Wang, Taotao Zhou, Minzhang Li, Teng Xu, Minye Wu, Lan Xu, Jingyi Yu

We present a neural relighting and expression transfer technique to transfer the facial expressions from a source performer to a portrait video of a target performer while enabling dynamic relighting.

Multi-Task Learning Neural Rendering

Few-shot Neural Human Performance Rendering from Sparse RGBD Videos

no code implementations 14 Jul 2021 Anqi Pang, Xin Chen, Haimin Luo, Minye Wu, Jingyi Yu, Lan Xu

To fill this gap, in this paper we propose a few-shot neural human rendering approach (FNHR) from only sparse RGBD inputs, which exploits the temporal and spatial redundancy to generate photo-realistic free-view output of human activities.

Neural Rendering

IREM: High-Resolution Magnetic Resonance (MR) Image Reconstruction via Implicit Neural Representation

no code implementations 29 Jun 2021 Qing Wu, Yuwei Li, Lan Xu, Ruiming Feng, Hongjiang Wei, Qing Yang, Boliang Yu, Xiaozhao Liu, Jingyi Yu, Yuyao Zhang

To collect high-quality high-resolution (HR) MR images, we propose a novel image reconstruction network named IREM, which is trained on multiple low-resolution (LR) MR images and achieves an arbitrary up-sampling rate for HR image reconstruction.

Anatomy Image Reconstruction +1

PIANO: A Parametric Hand Bone Model from Magnetic Resonance Imaging

1 code implementation 21 Jun 2021 Yuwei Li, Minye Wu, Yuyao Zhang, Lan Xu, Jingyi Yu

Hand modeling is critical for immersive VR/AR, action understanding, or human healthcare.

Action Understanding

Editable Free-viewpoint Video Using a Layered Neural Representation

1 code implementation 30 Apr 2021 Jiakai Zhang, Xinhang Liu, Xinyi Ye, Fuqiang Zhao, Yanshun Zhang, Minye Wu, Yingliang Zhang, Lan Xu, Jingyi Yu

Such a layered representation supports full perception and realistic manipulation of the dynamic scene while still supporting a free viewing experience over a wide range.

Disentanglement Scene Parsing +1

SportsCap: Monocular 3D Human Motion Capture and Fine-grained Understanding in Challenging Sports Videos

1 code implementation 23 Apr 2021 Xin Chen, Anqi Pang, Wei Yang, Yuexin Ma, Lan Xu, Jingyi Yu

In this paper, we propose SportsCap -- the first approach for simultaneously capturing 3D human motions and understanding fine-grained actions from monocular challenging sports video input.

Action Assessment Markerless Motion Capture

MirrorNeRF: One-shot Neural Portrait Radiance Field from Multi-mirror Catadioptric Imaging

no code implementations 6 Apr 2021 Ziyu Wang, Liao Wang, Fuqiang Zhao, Minye Wu, Lan Xu, Jingyi Yu

In this paper, we propose MirrorNeRF, a one-shot neural portrait free-viewpoint rendering approach that uses a catadioptric imaging system with multiple sphere mirrors and a single high-resolution digital camera. It is the first to combine a neural radiance field with catadioptric imaging, enabling one-shot photo-realistic human portrait reconstruction and rendering in a low-cost, casual capture setting.

Convolutional Neural Opacity Radiance Fields

1 code implementation 5 Apr 2021 Haimin Luo, Anpei Chen, Qixuan Zhang, Bai Pang, Minye Wu, Lan Xu, Jingyi Yu

In this paper, we propose a novel scheme to generate opacity radiance fields with a convolutional neural renderer for fuzzy objects. It is the first to combine explicit opacity supervision and a convolutional mechanism within the neural radiance field framework, enabling high-quality appearance and globally consistent alpha-matte generation in arbitrary novel views.

Neural Video Portrait Relighting in Real-time via Consistency Modeling

1 code implementation ICCV 2021 Longwen Zhang, Qixuan Zhang, Minye Wu, Jingyi Yu, Lan Xu

In this paper, we propose a neural approach for real-time, high-quality and coherent video portrait relighting, which jointly models the semantic, temporal and lighting consistency using a new dynamic OLAT dataset.

Disentanglement Single-Image Portrait Relighting

MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo

2 code implementations ICCV 2021 Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, Hao Su

We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.

Neural Rendering

GNeRF: GAN-based Neural Radiance Field without Posed Camera

1 code implementation ICCV 2021 Quan Meng, Anpei Chen, Haimin Luo, Minye Wu, Hao Su, Lan Xu, Xuming He, Jingyi Yu

We introduce GNeRF, a framework to marry Generative Adversarial Networks (GAN) with Neural Radiance Field (NeRF) reconstruction for the complex scenarios with unknown and even randomly initialized camera poses.

Novel View Synthesis

Non-line-of-Sight Imaging via Neural Transient Fields

1 code implementation 2 Jan 2021 Siyuan Shen, Zi Wang, Ping Liu, Zhengqing Pan, Ruiqian Li, Tian Gao, Shiying Li, Jingyi Yu

We present a neural modeling framework for Non-Line-of-Sight (NLOS) imaging.

LGNN: A Context-aware Line Segment Detector

no code implementations 13 Aug 2020 Quan Meng, Jiakai Zhang, Qiang Hu, Xuming He, Jingyi Yu

We present a novel real-time line segment detection scheme called Line Graph Neural Network (LGNN).

Line Segment Detection

SofGAN: A Portrait Image Generator with Dynamic Styling

1 code implementation 7 Jul 2020 Anpei Chen, Ruiyang Liu, Ling Xie, Zhang Chen, Hao Su, Jingyi Yu

To address this issue, we propose a SofGAN image generator to decouple the latent space of portraits into two subspaces: a geometry space and a texture space.

2D Semantic Segmentation Image Generation +1

AutoSweep: Recovering 3D Editable Objects from a Single Photograph

1 code implementation 27 May 2020 Xin Chen, Yuwei Li, Xi Luo, Tianjia Shao, Jingyi Yu, Kun Zhou, Youyi Zheng

We base our work on the assumption that most human-made objects are constituted by parts and these parts can be well represented by generalized primitives.

3D Reconstruction Instance Segmentation +1

OSLNet: Deep Small-Sample Classification with an Orthogonal Softmax Layer

1 code implementation 20 Apr 2020 Xiaoxu Li, Dongliang Chang, Zhanyu Ma, Zheng-Hua Tan, Jing-Hao Xue, Jie Cao, Jingyi Yu, Jun Guo

A deep neural network of multiple nonlinear layers forms a large function space, which can easily lead to overfitting when it encounters small-sample data.

Classification General Classification

Spatial-Angular Interaction for Light Field Image Super-Resolution

1 code implementation 17 Dec 2019 Yingqian Wang, Longguang Wang, Jungang Yang, Wei An, Jingyi Yu, Yulan Guo

Specifically, spatial and angular features are first separately extracted from input LFs, and then repetitively interacted to progressively incorporate spatial and angular information.

Image Super-Resolution SSIM

A Neural Rendering Framework for Free-Viewpoint Relighting

2 code implementations CVPR 2020 Zhang Chen, Anpei Chen, Guli Zhang, Chengyuan Wang, Yu Ji, Kiriakos N. Kutulakos, Jingyi Yu

We present a novel Relightable Neural Renderer (RNR) for simultaneous view synthesis and relighting using multi-view image inputs.

Neural Rendering Novel View Synthesis

Deep Coarse-to-fine Dense Light Field Reconstruction with Flexible Sampling and Geometry-aware Fusion

1 code implementation 31 Aug 2019 Jing Jin, Junhui Hou, Jie Chen, Huanqiang Zeng, Sam Kwong, Jingyi Yu

Specifically, the coarse sub-aperture image (SAI) synthesis module first explores the scene geometry from an unstructured sparsely-sampled LF and leverages it to independently synthesize novel SAIs, in which a confidence-based blending strategy is proposed to fuse the information from different input SAIs, giving an intermediate densely-sampled LF.

Depth Estimation
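The confidence-based blending strategy mentioned above can be illustrated with a toy sketch: several candidate syntheses of the same novel view are fused with softmax-normalized per-pixel confidence weights. The candidate views and confidence scores below are random placeholders, not the module's real outputs.

```python
import numpy as np

rng = np.random.default_rng(2)
candidates = rng.uniform(0, 1, (3, 4, 4))   # 3 candidate views, 4x4 pixels each
confidence = rng.normal(size=(3, 4, 4))     # unnormalized per-pixel scores

# Softmax over the candidate axis so weights at each pixel sum to one.
weights = np.exp(confidence)
weights /= weights.sum(axis=0, keepdims=True)

# Per-pixel weighted average: the fused view always stays inside the
# per-pixel range spanned by the candidates.
fused = (weights * candidates).sum(axis=0)
```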

Light Field Super-resolution via Attention-Guided Fusion of Hybrid Lenses

1 code implementation 23 Jul 2019 Jing Jin, Junhui Hou, Jie Chen, Sam Kwong, Jingyi Yu

To the best of our knowledge, this is the first end-to-end deep learning method for reconstructing a high-resolution LF image with a hybrid input.

Super-Resolution

Learning Semantics-aware Distance Map with Semantics Layering Network for Amodal Instance Segmentation

1 code implementation 30 May 2019 Ziheng Zhang, Anpei Chen, Ling Xie, Jingyi Yu, Shenghua Gao

Specifically, we first introduce a new representation, namely a semantics-aware distance map (sem-dist map), to serve as our target for amodal segmentation instead of the commonly used masks and heatmaps.

Amodal Instance Segmentation Semantic Segmentation

PIV-Based 3D Fluid Flow Reconstruction Using Light Field Camera

no code implementations 15 Apr 2019 Zhong Li, Jinwei Ye, Yu Ji, Hao Sheng, Jingyi Yu

Particle Imaging Velocimetry (PIV) estimates the flow of fluid by analyzing the motion of injected particles.

Depth Estimation Optical Flow Estimation

Non-Lambertian Surface Shape and Reflectance Reconstruction Using Concentric Multi-Spectral Light Field

no code implementations 9 Apr 2019 Mingyuan Zhou, Yu Ji, Yuqi Ding, Jinwei Ye, S. Susan Young, Jingyi Yu

In this paper, we introduce a novel concentric multi-spectral light field (CMSLF) design that is able to recover the shape and reflectance of surfaces with arbitrary material in one shot.

Depth Estimation

3D Face Reconstruction Using Color Photometric Stereo with Uncalibrated Near Point Lights

no code implementations 4 Apr 2019 Zhang Chen, Yu Ji, Mingyuan Zhou, Sing Bing Kang, Jingyi Yu

We avoid the need for spatial constancy of albedo; instead, we use a new measure for albedo similarity that is based on the albedo norm profile.

3D Face Reconstruction Semantic Segmentation

TightCap: 3D Human Shape Capture with Clothing Tightness Field

1 code implementation 4 Apr 2019 Xin Chen, Anqi Pang, Yang Wei, Lan Xu, Jingyi Yu

In this paper, we present TightCap, a data-driven scheme to capture both the human shape and dressed garments accurately with only a single 3D human scan, which enables numerous applications such as virtual try-on, biometrics and body evaluation.

Virtual Try-on

Generic Multiview Visual Tracking

no code implementations 4 Apr 2019 Minye Wu, Haibin Ling, Ning Bi, Shenghua Gao, Hao Sheng, Jingyi Yu

A natural solution to these challenges is to use multiple cameras with multiview inputs, though existing systems are mostly limited to specific targets (e.g. human), static cameras, and/or camera calibration.

Camera Calibration Trajectory Prediction +1

Photo-Realistic Facial Details Synthesis from Single Image

1 code implementation ICCV 2019 Anpei Chen, Zhang Chen, Guli Zhang, Ziheng Zhang, Kenny Mitchell, Jingyi Yu

Our technique employs expression analysis for proxy face geometry generation and combines supervised and unsupervised learning for facial detail synthesis.

Face Generation

Hair Segmentation on Time-of-Flight RGBD Images

no code implementations 7 Mar 2019 Yuanxi Ma, Cen Wang, Shiying Li, Jingyi Yu

Robust segmentation of hair from portrait images remains challenging: hair does not conform to a uniform shape, style or even color; dark hair in particular lacks features.

Deep Surface Light Fields

no code implementations 15 Oct 2018 Anpei Chen, Minye Wu, Yingliang Zhang, Nianyi Li, Jie Lu, Shenghua Gao, Jingyi Yu

A surface light field represents the radiance of rays originating from any points on the surface in any directions.

Data Compression Image Registration

4D Human Body Correspondences from Panoramic Depth Maps

no code implementations CVPR 2018 Zhong Li, Minye Wu, Wangyiteng Zhou, Jingyi Yu

The availability of affordable 3D full body reconstruction systems has given rise to free-viewpoint video (FVV) of human shapes.

Saliency Detection in 360° Videos

no code implementations ECCV 2018 Ziheng Zhang, Yanyu Xu, Jingyi Yu, Shenghua Gao

Considering that the 360° videos are usually stored with equirectangular panorama, we propose to implement the spherical convolution on panorama by stretching and rotating the kernel based on the location of patch to be convolved.

Video Saliency Detection

Learning to Dodge A Bullet: Concyclic View Morphing via Deep Learning

no code implementations ECCV 2018 Shi Jin, Ruiyang Liu, Yu Ji, Jinwei Ye, Jingyi Yu

The bullet-time effect, presented in the feature film "The Matrix", has been widely adopted in feature films and TV commercials to create an amazing stopping-time illusion.

A Generic Multi-Projection-Center Model and Calibration Method for Light Field Cameras

no code implementations 7 Aug 2018 Qi Zhang, Chunping Zhang, Jinbo Ling, Qing Wang, Jingyi Yu

Based on the MPC model and projective transformation, we propose a calibration algorithm to verify our light field camera model.

3D Reconstruction

Focus Manipulation Detection via Photometric Histogram Analysis

no code implementations CVPR 2018 Can Chen, Scott McCloskey, Jingyi Yu

With the rise of misinformation spread via social media channels, enabled by the increasing automation and realism of image manipulation tools, image forensics is an increasingly relevant problem.

Image Forensics Image Manipulation +1

Automatic 3D Indoor Scene Modeling From Single Panorama

no code implementations CVPR 2018 Yang Yang, Shi Jin, Ruiyang Liu, Sing Bing Kang, Jingyi Yu

The recovered layout is then used to guide shape estimation of the remaining objects using their normal information.

object-detection Object Detection +1

Gaze Prediction in Dynamic 360° Immersive Videos

no code implementations CVPR 2018 Yanyu Xu, Yanbing Dong, Junru Wu, Zhengzhong Sun, Zhiru Shi, Jingyi Yu, Shenghua Gao

This paper explores gaze prediction in dynamic 360° immersive videos, i.e., based on the history scan path and VR contents, we predict where a viewer will look at an upcoming time.

Gaze Prediction

Semantic See-Through Rendering on Light Fields

no code implementations 26 Mar 2018 Huangjie Yu, Guli Zhang, Yuanxi Ma, Yingliang Zhang, Jingyi Yu

We present a novel semantic light field (LF) refocusing technique that can achieve unprecedented see-through quality.

Stereo Matching Stereo Matching Hand

Robust 3D Human Motion Reconstruction Via Dynamic Template Construction

no code implementations 31 Jan 2018 Zhong Li, Yu Ji, Wei Yang, Jinwei Ye, Jingyi Yu

In multi-view human body capture systems, the recovered 3D geometry or even the acquired imagery data can be heavily corrupted due to occlusions, noise, limited field-of-view, etc.

Sparse Photometric 3D Face Reconstruction Guided by Morphable Models

no code implementations CVPR 2018 Xuan Cao, Zhang Chen, Anpei Chen, Xin Chen, Cen Wang, Jingyi Yu

We present a novel 3D face reconstruction technique that leverages sparse photometric stereo (PS) and latest advances on face registration/modeling from a single image.

3D Face Reconstruction Semantic Segmentation

Deep Eyes: Binocular Depth-from-Focus on Focal Stack Pairs

no code implementations 29 Nov 2017 Xinqing Guo, Zhang Chen, Siyuan Li, Yang Yang, Jingyi Yu

We then construct three individual networks: a Focus-Net to extract depth from a single focal stack, an EDoF-Net to obtain the extended depth of field (EDoF) image from the focal stack, and a Stereo-Net to conduct stereo matching.

Stereo Matching Stereo Matching Hand

Personalized Saliency and its Prediction

1 code implementation 9 Oct 2017 Yanyu Xu, Shenghua Gao, Junru Wu, Nianyi Li, Jingyi Yu

Specifically, we propose to decompose a personalized saliency map (referred to as PSM) into a universal saliency map (referred to as USM) predictable by existing saliency detection models and a new discrepancy map across users that characterizes personalized saliency.

Saliency Detection
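The decomposition above reduces to simple per-pixel arithmetic: the personalized saliency map (PSM) is the universal saliency map (USM) plus a user-specific discrepancy map, so only the discrepancy needs to be predicted per user. A minimal sketch with random placeholder maps; clipping the result to [0, 1] is an assumption for illustration, not a detail from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
usm = rng.uniform(0, 1, (4, 4))             # universal saliency, shared by all users
discrepancy = rng.normal(0, 0.1, (4, 4))    # user-specific offset map

# PSM = USM + discrepancy, kept inside valid saliency range.
psm = np.clip(usm + discrepancy, 0.0, 1.0)
```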

Catadioptric HyperSpectral Light Field Imaging

no code implementations ICCV 2017 Yujia Xue, Kang Zhu, Qiang Fu, Xilin Chen, Jingyi Yu

In this paper, we present a single camera hyperspectral light field imaging solution that we call Snapshot Plenoptic Imager (SPI).

Ray Space Features for Plenoptic Structure-From-Motion

no code implementations ICCV 2017 Yingliang Zhang, Peihong Yu, Wei Yang, Yuanxi Ma, Jingyi Yu

In this paper, we explore using light fields captured by plenoptic cameras or camera arrays as inputs.

Robust Guided Image Filtering

no code implementations 28 Mar 2017 Wei Liu, Xiaogang Chen, Chunhua Shen, Jingyi Yu, Qiang Wu, Jie Yang

In this paper, we propose a general framework for Robust Guided Image Filtering (RGIF), which contains a data term and a smoothness term, to solve the two issues mentioned above.

Rotational Crossed-Slit Light Field

no code implementations CVPR 2016 Nianyi Li, Haiting Lin, Bilin Sun, Mingyuan Zhou, Jingyi Yu

In this paper, we present a novel LF sampling scheme by exploiting a special non-centric camera called the crossed-slit or XSlit camera.

Stereo Matching Stereo Matching Hand

Depth Recovery From Light Field Using Focal Stack Symmetry

no code implementations ICCV 2015 Haiting Lin, Can Chen, Sing Bing Kang, Jingyi Yu

The other is a data consistency measure based on analysis-by-synthesis, i.e., the difference between the synthesized focal stack given the hypothesized depth map and that from the LF.

Scene-adaptive Coded Apertures Imaging

no code implementations 19 Jun 2015 Xuehui Wang, Jinli Suo, Jingyi Yu, Yongdong Zhang, Qionghai Dai

Firstly, we capture the scene with a pinhole and analyze the scene content to determine primary edge orientations.

Robust High Quality Image Guided Depth Upsampling

no code implementations 17 Jun 2015 Wei Liu, Yijun Li, Xiaogang Chen, Jie Yang, Qiang Wu, Jingyi Yu

A popular solution is upsampling the obtained noisy low resolution depth map with the guidance of the companion high resolution color image.

Automatic Layer Separation using Light Field Imaging

no code implementations 15 Jun 2015 Qiaosong Wang, Haiting Lin, Yi Ma, Sing Bing Kang, Jingyi Yu

We propose a novel approach that jointly removes reflection or translucent layer from a scene and estimates scene depth.

Resolving Scale Ambiguity Via XSlit Aspect Ratio Analysis

no code implementations ICCV 2015 Wei Yang, Haiting Lin, Sing Bing Kang, Jingyi Yu

We first conduct a comprehensive analysis to characterize DDAR, infer object depth from its AR, and model recoverable depth range, sensitivity, and error.

3D Reconstruction

A Weighted Sparse Coding Framework for Saliency Detection

no code implementations CVPR 2015 Nianyi Li, Bilin Sun, Jingyi Yu

In this paper, we present a unified saliency detection framework for handling heterogeneous types of input data.

Saliency Detection Stereo Matching +2

Ambient Occlusion via Compressive Visibility Estimation

no code implementations CVPR 2015 Wei Yang, Yu Ji, Haiting Lin, Yang Yang, Sing Bing Kang, Jingyi Yu

This enables a sparsity-prior based solution for iteratively recovering the surface normal, the surface albedo, and the visibility function from a small number of images.

Compressive Sensing

Saliency Detection on Light Field

no code implementations CVPR 2014 Nianyi Li, Jinwei Ye, Yu Ji, Haibin Ling, Jingyi Yu

Existing saliency detection approaches use images as inputs and are sensitive to foreground/background similarities, complex background textures, and occlusions.

Saliency Detection

Curvilinear Structure Tracking by Low Rank Tensor Approximation with Model Propagation

no code implementations CVPR 2014 Erkang Cheng, Yu Pang, Ying Zhu, Jingyi Yu, Haibin Ling

Robust tracking of deformable objects like catheters or vascular structures in X-ray images is an important technique used in image-guided medical interventions for effective motion compensation and dynamic multi-modality image fusion.

Motion Compensation

Aliasing Detection and Reduction in Plenoptic Imaging

no code implementations CVPR 2014 Zhaolin Xiao, Qing Wang, Guoqing Zhou, Jingyi Yu

When using plenoptic camera for digital refocusing, angular undersampling can cause severe (angular) aliasing artifacts.

Demosaicking

Image Pre-compensation: Balancing Contrast and Ringing

no code implementations CVPR 2014 Yu Ji, Jinwei Ye, Sing Bing Kang, Jingyi Yu

In particular, we show that linear tone mapping eliminates ringing but incurs severe contrast loss, while non-linear tone mapping functions such as Gamma curves slightly enhance contrast but introduce ringing.

Tone Mapping

Reconstructing Gas Flows Using Light-Path Approximation

no code implementations CVPR 2013 Yu Ji, Jinwei Ye, Jingyi Yu

By observing the LF-probe through the gas flow, we acquire a dense set of ray-ray correspondences and then reconstruct their light paths.

Fast Patch-Based Denoising Using Approximated Patch Geodesic Paths

no code implementations CVPR 2013 Xiaogang Chen, Sing Bing Kang, Jie Yang, Jingyi Yu

PatchGPs treat image patches as nodes and patch differences as edge weights for computing the shortest (geodesic) paths.

Image Denoising
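The patch-graph construction above amounts to a shortest-path computation: patches are nodes, squared patch differences are edge weights, and Dijkstra's algorithm yields the geodesic distances. A minimal sketch using scalar "patches" on a 4-connected grid; the real method uses patch vectors and an approximation of these paths for speed, and the denoising step itself is omitted.

```python
import heapq
import numpy as np

def patch_geodesics(patches, src):
    """Dijkstra over a 4-connected grid; edge cost = squared patch difference."""
    h, w = patches.shape
    dist = {p: np.inf for p in np.ndindex(h, w)}
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if d > dist[(i, j)]:
            continue  # stale heap entry
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < h and 0 <= nj < w:
                nd = d + (patches[i, j] - patches[ni, nj]) ** 2
                if nd < dist[(ni, nj)]:
                    dist[(ni, nj)] = nd
                    heapq.heappush(heap, (nd, (ni, nj)))
    return dist

# Toy 2x2 grid of scalar patch descriptors (hypothetical values).
patches = np.array([[0.0, 0.1],
                    [0.9, 1.0]])
d = patch_geodesics(patches, (0, 0))
```

Similar patches end up geodesically close even when dissimilar patches lie between them, which is what makes these paths useful as denoising weights.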

Manhattan Scene Understanding via XSlit Imaging

no code implementations CVPR 2013 Jinwei Ye, Yu Ji, Jingyi Yu

Specifically, we prove that parallel 3D lines map to 2D curves in an XSlit image and they converge at an XSlit Vanishing Point (XVP).

Scene Understanding
