Search Results for author: Jingyi Yu

Found 70 papers, 16 papers with code

3D Fluid Flow Reconstruction Using Compact Light Field PIV

no code implementations ECCV 2020 Zhong Li, Yu Ji, Jingyi Yu, Jinwei Ye

In this paper, we present a PIV solution that uses a compact lenslet-based light field camera to track dense particles floating in the fluid and reconstruct the 3D fluid flow.

Optical Flow Estimation

ProcK: Machine Learning for Knowledge-Intensive Processes

no code implementations 10 Sep 2021 Tobias Jacobs, Jingyi Yu, Julia Gastinger, Timo Sztyler

In this work, we introduce ProcK (Process & Knowledge), a novel pipeline to build business process prediction models that take into account both sequential data in the form of event logs and rich semantic information represented in a graph-structured knowledge base.

Underwater 3D Reconstruction Using Light Fields

no code implementations 5 Sep 2021 Yuqi Ding, Yu Ji, Jingyi Yu, Jinwei Ye

We also develop a fast algorithm for locating the angular patches in the presence of non-linear light paths.

3D Reconstruction Depth Estimation

iButter: Neural Interactive Bullet Time Generator for Human Free-viewpoint Rendering

no code implementations 12 Aug 2021 Liao Wang, Ziyu Wang, Pei Lin, Yuheng Jiang, Xin Suo, Minye Wu, Lan Xu, Jingyi Yu

To fill this gap, in this paper we propose a neural interactive bullet-time generator (iButter) for photo-realistic human free-viewpoint rendering from dense RGB streams, which enables flexible and interactive design for human bullet-time visual effects.

Video Generation

Neural Free-Viewpoint Performance Rendering under Complex Human-object Interactions

no code implementations 1 Aug 2021 Guoxing Sun, Xin Chen, Yizhang Chen, Anqi Pang, Pei Lin, Yuheng Jiang, Lan Xu, Jingya Wang, Jingyi Yu

In this paper, we propose a neural human performance capture and rendering system to generate both high-quality geometry and photo-realistic texture of both human and objects under challenging interaction scenarios in arbitrary novel views, from only sparse RGB streams.

Human-Object Interaction Detection Neural Rendering +2

Relightable Neural Video Portrait

no code implementations 30 Jul 2021 Youjia Wang, Taotao Zhou, Minzhang Li, Teng Xu, Minye Wu, Lan Xu, Jingyi Yu

With the ability to achieve simultaneous relighting and reenactment, we are able to improve the realism in a variety of virtual production and video rewrite applications.

Multi-Task Learning Neural Rendering

Few-shot Neural Human Performance Rendering from Sparse RGBD Videos

no code implementations 14 Jul 2021 Anqi Pang, Xin Chen, Haimin Luo, Minye Wu, Jingyi Yu, Lan Xu

To fill this gap, in this paper we propose a few-shot neural human rendering approach (FNHR) from only sparse RGBD inputs, which exploits the temporal and spatial redundancy to generate photo-realistic free-view output of human activities.

Neural Rendering

IREM: High-Resolution Magnetic Resonance (MR) Image Reconstruction via Implicit Neural Representation

no code implementations 29 Jun 2021 Qing Wu, Yuwei Li, Lan Xu, Ruiming Feng, Hongjiang Wei, Qing Yang, Boliang Yu, Xiaozhao Liu, Jingyi Yu, Yuyao Zhang

To obtain high-quality high-resolution (HR) MR images, we propose a novel image reconstruction network named IREM, which is trained on multiple low-resolution (LR) MR images and achieves an arbitrary up-sampling rate for HR image reconstruction.

Image Reconstruction Super-Resolution

PIANO: A Parametric Hand Bone Model from Magnetic Resonance Imaging

no code implementations 21 Jun 2021 Yuwei Li, Minye Wu, Yuyao Zhang, Lan Xu, Jingyi Yu

Hand modeling is critical for immersive VR/AR, action understanding, or human healthcare.

Action Understanding

Editable Free-viewpoint Video Using a Layered Neural Representation

no code implementations 30 Apr 2021 Jiakai Zhang, Xinhang Liu, Xinyi Ye, Fuqiang Zhao, Yanshun Zhang, Minye Wu, Yingliang Zhang, Lan Xu, Jingyi Yu

Such a layered representation supports full perception and realistic manipulation of the dynamic scene while still enabling a free viewing experience over a wide range.

Scene Parsing Video Generation

SportsCap: Monocular 3D Human Motion Capture and Fine-grained Understanding in Challenging Sports Videos

1 code implementation 23 Apr 2021 Xin Chen, Anqi Pang, Wei Yang, Yuexin Ma, Lan Xu, Jingyi Yu

In this paper, we propose SportsCap -- the first approach for simultaneously capturing 3D human motions and understanding fine-grained actions from monocular challenging sports video input.

Markerless Motion Capture

MirrorNeRF: One-shot Neural Portrait Radiance Field from Multi-mirror Catadioptric Imaging

no code implementations 6 Apr 2021 Ziyu Wang, Liao Wang, Fuqiang Zhao, Minye Wu, Lan Xu, Jingyi Yu

In this paper, we propose MirrorNeRF, a one-shot neural portrait free-viewpoint rendering approach that uses a catadioptric imaging system with multiple sphere mirrors and a single high-resolution digital camera. It is the first to combine a neural radiance field with catadioptric imaging, enabling one-shot photo-realistic human portrait reconstruction and rendering in a low-cost, casual capture setting.

Convolutional Neural Opacity Radiance Fields

no code implementations 5 Apr 2021 Haimin Luo, Anpei Chen, Qixuan Zhang, Bai Pang, Minye Wu, Lan Xu, Jingyi Yu

In this paper, we propose a novel scheme to generate opacity radiance fields with a convolutional neural renderer for fuzzy objects. It is the first to combine explicit opacity supervision and a convolutional mechanism in the neural radiance field framework, enabling high-quality appearance and globally consistent alpha-matte generation in arbitrary novel views.

Neural Video Portrait Relighting in Real-time via Consistency Modeling

1 code implementation 1 Apr 2021 Longwen Zhang, Qixuan Zhang, Minye Wu, Jingyi Yu, Lan Xu

In this paper, we propose a neural approach for real-time, high-quality and coherent video portrait relighting, which jointly models the semantic, temporal and lighting consistency using a new dynamic OLAT dataset.

GNeRF: GAN-based Neural Radiance Field without Posed Camera

1 code implementation 29 Mar 2021 Quan Meng, Anpei Chen, Haimin Luo, Minye Wu, Hao Su, Lan Xu, Xuming He, Jingyi Yu

We introduce GNeRF, a framework to marry Generative Adversarial Networks (GAN) with Neural Radiance Field (NeRF) reconstruction for the complex scenarios with unknown and even randomly initialized camera poses.

Novel View Synthesis
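For readers unfamiliar with the NeRF reconstruction these works build on, the rendering backbone is alpha compositing of sampled densities and colors along each ray. Below is a minimal, illustrative sketch of that standard volume-rendering step only; GNeRF additionally wraps such a reconstruction in a GAN to cope with unknown camera poses, which is not shown here.

```python
import numpy as np

def volume_render(densities, colors, deltas):
    """NeRF-style volume rendering of one ray: convert per-sample densities
    into alphas, accumulate transmittance, and composite sample colors."""
    alphas = 1.0 - np.exp(-densities * deltas)          # opacity per sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans                            # compositing weights
    return (weights[:, None] * colors).sum(axis=0)      # rendered RGB

# One ray with 4 samples; the single opaque red sample dominates the output.
densities = np.array([0.0, 50.0, 0.0, 0.0])
colors = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
deltas = np.full(4, 0.5)
print(volume_render(densities, colors, deltas))  # ≈ [1, 0, 0]
```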

MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo

1 code implementation 29 Mar 2021 Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, Hao Su

We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.

Neural Rendering

Non-line-of-Sight Imaging via Neural Transient Fields

1 code implementation 2 Jan 2021 Siyuan Shen, Zi Wang, Ping Liu, Zhengqing Pan, Ruiqian Li, Tian Gao, Shiying Li, Jingyi Yu

We present a neural modeling framework for Non-Line-of-Sight (NLOS) imaging.

LGNN: A Context-aware Line Segment Detector

no code implementations 13 Aug 2020 Quan Meng, Jiakai Zhang, Qiang Hu, Xuming He, Jingyi Yu

We present a novel real-time line segment detection scheme called Line Graph Neural Network (LGNN).

Line Segment Detection

SofGAN: A Portrait Image Generator with Dynamic Styling

1 code implementation 7 Jul 2020 Anpei Chen, Ruiyang Liu, Ling Xie, Zhang Chen, Hao Su, Jingyi Yu

To address this issue, we propose a SofGAN image generator to decouple the latent space of portraits into two subspaces: a geometry space and a texture space.

2D Semantic Segmentation Image Generation +1

AutoSweep: Recovering 3D Editable Objects from a Single Photograph

1 code implementation 27 May 2020 Xin Chen, Yuwei Li, Xi Luo, Tianjia Shao, Jingyi Yu, Kun Zhou, Youyi Zheng

We base our work on the assumption that most human-made objects are composed of parts and that these parts can be well represented by generalized primitives.

3D Reconstruction Instance Segmentation +1

OSLNet: Deep Small-Sample Classification with an Orthogonal Softmax Layer

1 code implementation 20 Apr 2020 Xiaoxu Li, Dongliang Chang, Zhanyu Ma, Zheng-Hua Tan, Jing-Hao Xue, Jie Cao, Jingyi Yu, Jun Guo

A deep neural network of multiple nonlinear layers forms a large function space, which can easily lead to overfitting when it encounters small-sample data.

Classification General Classification

Spatial-Angular Interaction for Light Field Image Super-Resolution

1 code implementation 17 Dec 2019 Yingqian Wang, Longguang Wang, Jungang Yang, Wei An, Jingyi Yu, Yulan Guo

Specifically, spatial and angular features are first separately extracted from input LFs, and then repetitively interacted to progressively incorporate spatial and angular information.

Image Super-Resolution SSIM

A Neural Rendering Framework for Free-Viewpoint Relighting

2 code implementations CVPR 2020 Zhang Chen, Anpei Chen, Guli Zhang, Chengyuan Wang, Yu Ji, Kiriakos N. Kutulakos, Jingyi Yu

We present a novel Relightable Neural Renderer (RNR) for simultaneous view synthesis and relighting using multi-view image inputs.

Neural Rendering Novel View Synthesis

Deep Coarse-to-fine Dense Light Field Reconstruction with Flexible Sampling and Geometry-aware Fusion

1 code implementation 31 Aug 2019 Jing Jin, Junhui Hou, Jie Chen, Huanqiang Zeng, Sam Kwong, Jingyi Yu

Specifically, the coarse sub-aperture image (SAI) synthesis module first explores the scene geometry from an unstructured sparsely-sampled LF and leverages it to independently synthesize novel SAIs, in which a confidence-based blending strategy is proposed to fuse the information from different input SAIs, giving an intermediate densely-sampled LF.

Depth Estimation Virtual Reality

Light Field Super-resolution via Attention-Guided Fusion of Hybrid Lenses

no code implementations 23 Jul 2019 Jing Jin, Junhui Hou, Jie Chen, Sam Kwong, Jingyi Yu

To the best of our knowledge, this is the first end-to-end deep learning method for reconstructing a high-resolution LF image with a hybrid input.

Super-Resolution

Learning Semantics-aware Distance Map with Semantics Layering Network for Amodal Instance Segmentation

1 code implementation 30 May 2019 Ziheng Zhang, Anpei Chen, Ling Xie, Jingyi Yu, Shenghua Gao

Specifically, we first introduce a new representation, namely a semantics-aware distance map (sem-dist map), to serve as our target for amodal segmentation instead of the commonly used masks and heatmaps.

Amodal Instance Segmentation Semantic Segmentation

PIV-Based 3D Fluid Flow Reconstruction Using Light Field Camera

no code implementations 15 Apr 2019 Zhong Li, Jinwei Ye, Yu Ji, Hao Sheng, Jingyi Yu

Particle Imaging Velocimetry (PIV) estimates the flow of fluid by analyzing the motion of injected particles.

Depth Estimation Optical Flow Estimation
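The PIV entries above estimate flow from particle motion; the core step in classical 2D PIV is cross-correlating interrogation windows between consecutive frames to recover the dominant particle displacement. Below is a minimal planar sketch of that step using plain FFT-based cross-correlation, not the light-field pipelines of the papers themselves; the function name and window setup are illustrative.

```python
import numpy as np

def piv_displacement(window_a, window_b):
    """Integer-pixel displacement between two interrogation windows via
    FFT-based circular cross-correlation (the standard 2D PIV step)."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    # Cross-correlate in the frequency domain and locate the peak.
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices above half the window size wrap around to negative shifts.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)

# Synthetic check: a random particle image shifted by (3, 5) pixels.
rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, shift=(3, 5), axis=(0, 1))
print(piv_displacement(frame1, frame2))  # → (3, 5)
```

Real PIV codes refine this with sub-pixel peak fitting and per-window iteration; the sketch keeps only the correlation idea.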

Non-Lambertian Surface Shape and Reflectance Reconstruction Using Concentric Multi-Spectral Light Field

no code implementations 9 Apr 2019 Mingyuan Zhou, Yu Ji, Yuqi Ding, Jinwei Ye, S. Susan Young, Jingyi Yu

In this paper, we introduce a novel concentric multi-spectral light field (CMSLF) design that is able to recover the shape and reflectance of surfaces with arbitrary material in one shot.

Depth Estimation

Generic Multiview Visual Tracking

no code implementations 4 Apr 2019 Minye Wu, Haibin Ling, Ning Bi, Shenghua Gao, Hao Sheng, Jingyi Yu

A natural solution to these challenges is to use multiple cameras with multiview inputs, though existing systems are mostly limited to specific targets (e.g., humans), static cameras, and/or camera calibration.

Trajectory Prediction Visual Tracking

TightCap: 3D Human Shape Capture with Clothing Tightness Field

1 code implementation 4 Apr 2019 Xin Chen, Anqi Pang, Wei Yang, Lan Xu, Jingyi Yu

In this paper, we present TightCap, a data-driven scheme to capture both the human shape and dressed garments accurately with only a single 3D human scan, which enables numerous applications such as virtual try-on, biometrics and body evaluation.

Virtual Try-on

3D Face Reconstruction Using Color Photometric Stereo with Uncalibrated Near Point Lights

no code implementations 4 Apr 2019 Zhang Chen, Yu Ji, Mingyuan Zhou, Sing Bing Kang, Jingyi Yu

We avoid the need for spatial constancy of albedo; instead, we use a new measure for albedo similarity that is based on the albedo norm profile.

3D Face Reconstruction Semantic Segmentation

Photo-Realistic Facial Details Synthesis from Single Image

1 code implementation ICCV 2019 Anpei Chen, Zhang Chen, Guli Zhang, Ziheng Zhang, Kenny Mitchell, Jingyi Yu

Our technique employs expression analysis for proxy face geometry generation and combines supervised and unsupervised learning for facial detail synthesis.

Face Generation

Hair Segmentation on Time-of-Flight RGBD Images

no code implementations 7 Mar 2019 Yuanxi Ma, Cen Wang, Shiying Li, Jingyi Yu

Robust segmentation of hair from portrait images remains challenging: hair does not conform to a uniform shape, style or even color; dark hair in particular lacks features.

Deep Surface Light Fields

no code implementations 15 Oct 2018 Anpei Chen, Minye Wu, Yingliang Zhang, Nianyi Li, Jie Lu, Shenghua Gao, Jingyi Yu

A surface light field represents the radiance of rays originating from any points on the surface in any directions.

Image Registration

4D Human Body Correspondences from Panoramic Depth Maps

no code implementations CVPR 2018 Zhong Li, Minye Wu, Wangyiteng Zhou, Jingyi Yu

The availability of affordable 3D full body reconstruction systems has given rise to free-viewpoint video (FVV) of human shapes.

Learning to Dodge A Bullet: Concyclic View Morphing via Deep Learning

no code implementations ECCV 2018 Shi Jin, Ruiyang Liu, Yu Ji, Jinwei Ye, Jingyi Yu

The bullet-time effect, presented in the feature film "The Matrix", has been widely adopted in feature films and TV commercials to create an amazing stopping-time illusion.

Rectification

Saliency Detection in 360° Videos

no code implementations ECCV 2018 Ziheng Zhang, Yanyu Xu, Jingyi Yu, Shenghua Gao

Considering that 360° videos are usually stored as equirectangular panoramas, we propose to implement spherical convolution on the panorama by stretching and rotating the kernel based on the location of the patch to be convolved.

Video Saliency Detection

A Generic Multi-Projection-Center Model and Calibration Method for Light Field Cameras

no code implementations 7 Aug 2018 Qi Zhang, Chunping Zhang, Jinbo Ling, Qing Wang, Jingyi Yu

Based on the MPC model and projective transformation, we propose a calibration algorithm to verify our light field camera model.

3D Reconstruction

Automatic 3D Indoor Scene Modeling From Single Panorama

no code implementations CVPR 2018 Yang Yang, Shi Jin, Ruiyang Liu, Sing Bing Kang, Jingyi Yu

The recovered layout is then used to guide shape estimation of the remaining objects using their normal information.

Object Detection

Focus Manipulation Detection via Photometric Histogram Analysis

no code implementations CVPR 2018 Can Chen, Scott McCloskey, Jingyi Yu

With the rise of misinformation spread via social media channels, enabled by the increasing automation and realism of image manipulation tools, image forensics is an increasingly relevant problem.

Image Forensics Image Manipulation +1

Gaze Prediction in Dynamic 360° Immersive Videos

no code implementations CVPR 2018 Yanyu Xu, Yanbing Dong, Junru Wu, Zhengzhong Sun, Zhiru Shi, Jingyi Yu, Shenghua Gao

This paper explores gaze prediction in dynamic 360° immersive videos, i.e., based on the history scan path and the VR content, we predict where a viewer will look at an upcoming time.

Eye Tracking Gaze Prediction

Semantic See-Through Rendering on Light Fields

no code implementations 26 Mar 2018 Huangjie Yu, Guli Zhang, Yuanxi Ma, Yingliang Zhang, Jingyi Yu

We present a novel semantic light field (LF) refocusing technique that can achieve unprecedented see-through quality.

Stereo Matching

Robust 3D Human Motion Reconstruction Via Dynamic Template Construction

no code implementations 31 Jan 2018 Zhong Li, Yu Ji, Wei Yang, Jinwei Ye, Jingyi Yu

In multi-view human body capture systems, the recovered 3D geometry or even the acquired imagery data can be heavily corrupted due to occlusions, noise, limited field of view, etc.

Deep Eyes: Binocular Depth-from-Focus on Focal Stack Pairs

no code implementations 29 Nov 2017 Xinqing Guo, Zhang Chen, Siyuan Li, Yang Yang, Jingyi Yu

We then construct three individual networks: a Focus-Net to extract depth from a single focal stack, an EDoF-Net to obtain the extended depth of field (EDoF) image from the focal stack, and a Stereo-Net to conduct stereo matching.

Stereo Matching

Sparse Photometric 3D Face Reconstruction Guided by Morphable Models

no code implementations CVPR 2018 Xuan Cao, Zhang Chen, Anpei Chen, Xin Chen, Cen Wang, Jingyi Yu

We present a novel 3D face reconstruction technique that leverages sparse photometric stereo (PS) and latest advances on face registration/modeling from a single image.

3D Face Reconstruction Semantic Segmentation

Personalized Saliency and its Prediction

1 code implementation 9 Oct 2017 Yanyu Xu, Shenghua Gao, Junru Wu, Nianyi Li, Jingyi Yu

Specifically, we propose to decompose a personalized saliency map (referred to as PSM) into a universal saliency map (referred to as USM) predictable by existing saliency detection models and a new discrepancy map across users that characterizes personalized saliency.

Saliency Detection
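The decomposition described in the Personalized Saliency entry above is additive by construction: each user's PSM is a shared USM plus a per-user discrepancy map. A toy sketch of that decomposition, with random stand-in maps and a simple mean in place of the paper's learned components:

```python
import numpy as np

# Hypothetical data: personalized saliency maps (PSMs) for 5 users on one image.
rng = np.random.default_rng(1)
psms = rng.random((5, 32, 32))

# Additive decomposition: universal saliency map (USM) + per-user discrepancy.
# (The paper predicts both terms with models; the mean is just a stand-in.)
usm = psms.mean(axis=0)
discrepancy = psms - usm  # broadcasts over the user axis

# The decomposition reconstructs every user's PSM exactly.
assert np.allclose(usm + discrepancy, psms)
print(usm.shape, discrepancy.shape)  # → (32, 32) (5, 32, 32)
```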

Catadioptric HyperSpectral Light Field Imaging

no code implementations ICCV 2017 Yujia Xue, Kang Zhu, Qiang Fu, Xilin Chen, Jingyi Yu

In this paper, we present a single camera hyperspectral light field imaging solution that we call Snapshot Plenoptic Imager (SPI).

Ray Space Features for Plenoptic Structure-From-Motion

no code implementations ICCV 2017 Yingliang Zhang, Peihong Yu, Wei Yang, Yuanxi Ma, Jingyi Yu

In this paper, we explore using light fields captured by plenoptic cameras or camera arrays as inputs.

Structure from Motion

Robust Guided Image Filtering

no code implementations 28 Mar 2017 Wei Liu, Xiaogang Chen, Chunhua Shen, Jingyi Yu, Qiang Wu, Jie Yang

In this paper, we propose a general framework for Robust Guided Image Filtering (RGIF), which contains a data term and a smoothness term, to solve the two issues mentioned above.

Rotational Crossed-Slit Light Field

no code implementations CVPR 2016 Nianyi Li, Haiting Lin, Bilin Sun, Mingyuan Zhou, Jingyi Yu

In this paper, we present a novel LF sampling scheme by exploiting a special non-centric camera called the crossed-slit or XSlit camera.

Stereo Matching

Depth Recovery From Light Field Using Focal Stack Symmetry

no code implementations ICCV 2015 Haiting Lin, Can Chen, Sing Bing Kang, Jingyi Yu

The other is a data consistency measure based on analysis-by-synthesis, i.e., the difference between the synthesized focal stack given the hypothesized depth map and that from the LF.

Scene-adaptive Coded Apertures Imaging

no code implementations 19 Jun 2015 Xuehui Wang, Jinli Suo, Jingyi Yu, Yongdong Zhang, Qionghai Dai

Firstly, we capture the scene with a pinhole and analyze the scene content to determine primary edge orientations.

Robust High Quality Image Guided Depth Upsampling

no code implementations 17 Jun 2015 Wei Liu, Yijun Li, Xiaogang Chen, Jie Yang, Qiang Wu, Jingyi Yu

A popular solution is upsampling the obtained noisy low resolution depth map with the guidance of the companion high resolution color image.

Automatic Layer Separation using Light Field Imaging

no code implementations 15 Jun 2015 Qiaosong Wang, Haiting Lin, Yi Ma, Sing Bing Kang, Jingyi Yu

We propose a novel approach that jointly removes reflection or translucent layer from a scene and estimates scene depth.

Resolving Scale Ambiguity Via XSlit Aspect Ratio Analysis

no code implementations ICCV 2015 Wei Yang, Haiting Lin, Sing Bing Kang, Jingyi Yu

We first conduct a comprehensive analysis to characterize DDAR, infer object depth from its AR, and model recoverable depth range, sensitivity, and error.

3D Reconstruction

A Weighted Sparse Coding Framework for Saliency Detection

no code implementations CVPR 2015 Nianyi Li, Bilin Sun, Jingyi Yu

In this paper, we present a unified saliency detection framework for handling heterogenous types of input data.

Saliency Detection Stereo Matching +1

Ambient Occlusion via Compressive Visibility Estimation

no code implementations CVPR 2015 Wei Yang, Yu Ji, Haiting Lin, Yang Yang, Sing Bing Kang, Jingyi Yu

This enables a sparsity-prior based solution for iteratively recovering the surface normal, the surface albedo, and the visibility function from a small number of images.

Compressive Sensing

Aliasing Detection and Reduction in Plenoptic Imaging

no code implementations CVPR 2014 Zhaolin Xiao, Qing Wang, Guoqing Zhou, Jingyi Yu

When using plenoptic camera for digital refocusing, angular undersampling can cause severe (angular) aliasing artifacts.

Demosaicking

Curvilinear Structure Tracking by Low Rank Tensor Approximation with Model Propagation

no code implementations CVPR 2014 Erkang Cheng, Yu Pang, Ying Zhu, Jingyi Yu, Haibin Ling

Robust tracking of deformable objects such as catheters or vascular structures in X-ray images is an important technique in image-guided medical interventions for effective motion compensation and dynamic multi-modality image fusion.

Motion Compensation

Image Pre-compensation: Balancing Contrast and Ringing

no code implementations CVPR 2014 Yu Ji, Jinwei Ye, Sing Bing Kang, Jingyi Yu

In particular, we show that linear tone mapping eliminates ringing but incurs severe contrast loss, while non-linear tone mapping functions such as Gamma curves slightly enhances contrast but introduces ringing.

Tone Mapping
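The contrast/ringing trade-off described in the pre-compensation entry above can be seen directly from the tone-mapping curves: a linear map compresses the output range uniformly, while a gamma curve lifts mid-tones non-linearly. A toy sketch with hypothetical scale, offset, and gamma values, not the paper's optimized mappings:

```python
import numpy as np

def linear_tone_map(img, scale=0.6, offset=0.2):
    """Linear tone mapping: uniformly compresses contrast into
    [offset, offset + scale], which suppresses ringing."""
    return np.clip(scale * img + offset, 0.0, 1.0)

def gamma_tone_map(img, gamma=2.2):
    """Gamma tone mapping: non-linearly lifts mid-tones (more perceived
    contrast), but the non-linearity can re-introduce ringing."""
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)

x = np.linspace(0.0, 1.0, 5)
print(linear_tone_map(x))  # output squeezed into [0.2, 0.8]
print(gamma_tone_map(x))   # mid-tones pushed above the identity line
```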

Saliency Detection on Light Field

no code implementations CVPR 2014 Nianyi Li, Jinwei Ye, Yu Ji, Haibin Ling, Jingyi Yu

Existing saliency detection approaches use images as inputs and are sensitive to foreground/background similarities, complex background textures, and occlusions.

Saliency Detection

Fast Patch-Based Denoising Using Approximated Patch Geodesic Paths

no code implementations CVPR 2013 Xiaogang Chen, Sing Bing Kang, Jie Yang, Jingyi Yu

PatchGPs treat image patches as nodes and patch differences as edge weights for computing the shortest (geodesic) paths.

Image Denoising
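The patch-graph construction described in the PatchGP entry above reduces to a shortest-path problem: patches are nodes, patch differences are edge weights, and geodesic distances come from Dijkstra's algorithm. A minimal sketch over a grid of patches (exact Dijkstra for illustration; the paper's contribution is a fast approximation of these paths, which is not shown):

```python
import heapq
import numpy as np

def patch_geodesic(patch_grid, source=(0, 0)):
    """Geodesic distances from a source patch on a 4-connected grid of
    patches, with L2 patch differences as edge weights (Dijkstra)."""
    h, w = patch_grid.shape[:2]
    dist = np.full((h, w), np.inf)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if d > dist[i, j]:          # stale queue entry
            continue
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < h and 0 <= nj < w:
                w_edge = np.linalg.norm(patch_grid[i, j] - patch_grid[ni, nj])
                if d + w_edge < dist[ni, nj]:
                    dist[ni, nj] = d + w_edge
                    heapq.heappush(heap, (d + w_edge, (ni, nj)))
    return dist

# 3x3 grid of identical 5x5 patches: every geodesic distance is zero.
patches = np.ones((3, 3, 5, 5))
print(patch_geodesic(patches)[2, 2])  # → 0.0
```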

Manhattan Scene Understanding via XSlit Imaging

no code implementations CVPR 2013 Jinwei Ye, Yu Ji, Jingyi Yu

Specifically, we prove that parallel 3D lines map to 2D curves in an XSlit image and they converge at an XSlit Vanishing Point (XVP).

Scene Understanding

Reconstructing Gas Flows Using Light-Path Approximation

no code implementations CVPR 2013 Yu Ji, Jinwei Ye, Jingyi Yu

By observing the LF-probe through the gas flow, we acquire a dense set of ray-ray correspondences and then reconstruct their light paths.
