no code implementations • ECCV 2020 • Zhong Li, Yu Ji, Jingyi Yu, Jinwei Ye
In this paper, we present a PIV solution that uses a compact lenslet-based light field camera to track dense particles floating in the fluid and reconstruct the 3D fluid flow.
no code implementations • 2 Feb 2023 • Juze Zhang, Ye Shi, Lan Xu, Jingyi Yu, Jingya Wang
This paper presents an inverse kinematic optimization layer (IKOL) for 3D human pose and shape estimation that leverages the strength of both optimization- and regression-based methods within an end-to-end framework.
no code implementations • 15 Dec 2022 • Juze Zhang, Haimin Luo, Hongdi Yang, Xinru Xu, Qianyang Wu, Ye Shi, Jingyi Yu, Lan Xu, Jingya Wang
We construct a dense multi-view dome to acquire a complex human object interaction dataset, named HODome, that consists of $\sim$75M frames on 10 subjects interacting with 23 objects.
no code implementations • 15 Dec 2022 • Taotao Zhou, Kai He, Di Wu, Teng Xu, Qixuan Zhang, Kuixiang Shao, Wenzheng Chen, Lan Xu, Jingyi Yu
Human modeling and relighting are two fundamental problems in computer vision and graphics, where high-quality datasets can largely facilitate related research.
1 code implementation • 8 Dec 2022 • Xin Chen, Biao Jiang, Wen Liu, Zilong Huang, Bin Fu, Tao Chen, Jingyi Yu, Gang Yu
We study a challenging task, conditional human motion generation, which produces plausible human motion sequences according to various conditional inputs, such as action classes or textual descriptors.
Ranked #1 on Motion Synthesis on KIT Motion-Language
no code implementations • 30 Nov 2022 • Peishan Cong, Yiteng Xu, Yiming Ren, Juze Zhang, Lan Xu, Jingya Wang, Jingyi Yu, Yuexin Ma
Motivated by this, we propose a monocular camera and single LiDAR-based method for 3D multi-person pose estimation in large-scale scenes, which is easy to deploy and insensitive to light.
no code implementations • 22 Nov 2022 • Xiao Han, Peishan Cong, Lan Xu, Jingya Wang, Jingyi Yu, Yuexin Ma
LiDAR can capture accurate depth information in large-scale scenarios without the effect of light conditions, and the captured point cloud contains gait-related 3D geometric properties and dynamic motion characteristics.
no code implementations • 23 Oct 2022 • Qing Wu, Xin Li, Hongjiang Wei, Jingyi Yu, Yuyao Zhang
NeRF-based SVCT methods represent the desired CT image as a continuous function of spatial coordinates and train a Multi-Layer Perceptron (MLP) to learn the function by minimizing loss on the SV sinogram.
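As a rough illustration of this formulation, a minimal sketch might look like the following, where the 2D parallel-beam geometry, the random placeholder sinogram, and names such as `CoordinateMLP` and `ray_integral` are our own assumptions rather than the paper's code:

```python
# Hedged sketch of the NeRF-based SVCT idea: an MLP maps spatial coordinates
# to attenuation values and is fit by matching line integrals of its output
# to the observed sparse-view (SV) sinogram. Geometry, measurements, and all
# names are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xy):       # xy: (N, 2) points in [-1, 1]^2
        return self.net(xy)      # predicted attenuation at each point

def ray_integral(model, angle, offsets, n_samples=64):
    """Approximate parallel-beam line integrals through the implicit image."""
    t = torch.linspace(-1.0, 1.0, n_samples)
    d = torch.stack([torch.cos(angle), torch.sin(angle)])    # ray direction
    n = torch.stack([-torch.sin(angle), torch.cos(angle)])   # detector axis
    pts = offsets[:, None, None] * n + t[None, :, None] * d  # (R, S, 2)
    vals = model(pts.reshape(-1, 2)).reshape(len(offsets), n_samples)
    return vals.mean(dim=-1)     # one integral per detector offset

model = CoordinateMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
angles = torch.linspace(0, torch.pi, 30)    # sparse set of view angles
offsets = torch.linspace(-1, 1, 128)        # detector bins
sv_sinogram = torch.rand(30, 128)           # placeholder SV measurements

for step in range(200):
    i = torch.randint(len(angles), (1,)).item()
    pred = ray_integral(model, angles[i], offsets)
    loss = ((pred - sv_sinogram[i]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```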
no code implementations • 18 Sep 2022 • Fuqiang Zhao, Yuheng Jiang, Kaixin Yao, Jiakai Zhang, Liao Wang, Haizhao Dai, Yuhui Zhong, Yingliang Zhang, Minye Wu, Lan Xu, Jingyi Yu
In this paper, we present a comprehensive neural approach for high-quality reconstruction, compression, and rendering of human performances from dense multi-view videos.
no code implementations • 14 Sep 2022 • Zesong Qiu, Yuwei Li, Dongming He, Qixuan Zhang, Longwen Zhang, Yinghao Zhang, Jingya Wang, Lan Xu, Xudong Wang, Yuyao Zhang, Jingyi Yu
Named after the fossils of one of the oldest known human ancestors, our LUCY dataset contains high-quality Computed Tomography (CT) scans of the complete human head before and after orthognathic surgeries, critical for evaluating surgery results.
no code implementations • 12 Sep 2022 • Qing Wu, Ruimin Feng, Hongjiang Wei, Jingyi Yu, Yuyao Zhang
Compared with recent related works that solve similar problems using implicit neural representation (INR) networks, our essential contribution is an effective and simple re-projection strategy that pushes tomography image reconstruction quality beyond that of supervised deep-learning CT reconstruction methods.
no code implementations • 9 Sep 2022 • Ziyu Wang, Yu Deng, Jiaolong Yang, Jingyi Yu, Xin Tong
Experiments show that our method can successfully learn the generative model from unstructured monocular images and well disentangle the shape and appearance for objects (e.g., chairs) with large topological variance.
no code implementations • 16 Jul 2022 • Juze Zhang, Jingya Wang, Ye Shi, Fei Gao, Lan Xu, Jingyi Yu
This method first uses 2.5D pose and geometry information to infer camera-centric root depths in a forward pass, and then exploits the root depths to further improve representation learning of 2.5D pose estimation in a backward pass.
no code implementations • 3 Jul 2022 • Youjia Wang, Teng Xu, Yiwen Wu, Minzhang Li, Wenzheng Chen, Lan Xu, Jingyi Yu
We extend Total Relighting to fix this problem by unifying its multi-view input normal maps with the physical face model.
no code implementations • 30 May 2022 • Chengfeng Zhao, Yiming Ren, Yannan He, Peishan Cong, Han Liang, Jingyi Yu, Lan Xu, Yuexin Ma
We propose a multi-sensor fusion method for capturing challenging 3D human motions with accurate consecutive local poses and global trajectories in large-scale scenarios, only using a single LiDAR and 4 IMUs.
1 code implementation • 26 May 2022 • Binbin Huang, Xinhao Yan, Anpei Chen, Shenghua Gao, Jingyi Yu
We present an efficient frequency-based neural representation termed PREF: a shallow MLP augmented with a phasor volume that covers a significantly broader spectrum than previous Fourier feature mapping or Positional Encoding.
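For context, the Fourier feature mapping / positional encoding that PREF is measured against typically lifts each coordinate into sines and cosines at octave frequencies before the MLP; a minimal sketch in our own notation (not PREF's implementation):

```python
# Standard positional-encoding / Fourier-feature mapping (assumed form): a
# coordinate x is lifted to sines and cosines at octave frequencies, letting
# a small MLP fit high-frequency signals.
import numpy as np

def positional_encoding(x, num_bands=6):
    """x: (N, D) coordinates -> (N, 2 * num_bands * D) Fourier features."""
    freqs = 2.0 ** np.arange(num_bands) * np.pi          # 2^k * pi
    xb = x[:, None, :] * freqs[None, :, None]            # (N, bands, D)
    return np.concatenate([np.sin(xb), np.cos(xb)], axis=-1).reshape(len(x), -1)

coords = np.random.rand(4, 3)              # e.g. 3D sample positions in [0, 1]
print(positional_encoding(coords).shape)   # (4, 36)
```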
no code implementations • CVPR 2022 • Jialian Li, Jingyi Zhang, Zhiyong Wang, Siqi Shen, Chenglu Wen, Yuexin Ma, Lan Xu, Jingyi Yu, Cheng Wang
Quantitative and qualitative experiments show that our method outperforms the techniques based only on RGB images.
no code implementations • 17 Mar 2022 • Han Liang, Yannan He, Chengfeng Zhao, Mutian Li, Jingya Wang, Jingyi Yu, Lan Xu
Monocular 3D motion capture (mocap) is beneficial to many applications.
Ranked #1 on 3D Human Pose Estimation on AIST++
2 code implementations • 17 Mar 2022 • Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, Hao Su
We demonstrate that applying traditional CP decomposition -- that factorizes tensors into rank-one components with compact vectors -- in our framework leads to improvements over vanilla NeRF.
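A toy illustration of that CP factorization, under our own assumed shapes (not the paper's code): a dense 3D grid is rebuilt as a sum of R rank-one outer products, so storage drops from X·Y·Z scalars to R·(X+Y+Z).

```python
# CP reconstruction of a 3D tensor from rank-one components (toy example).
import numpy as np

def cp_reconstruct(u, v, w):
    """u: (R, X), v: (R, Y), w: (R, Z) -> dense (X, Y, Z) tensor."""
    return np.einsum('rx,ry,rz->xyz', u, v, w)   # sum of R outer products

R, X, Y, Z = 8, 32, 32, 32
u, v, w = (np.random.randn(R, n) for n in (X, Y, Z))
grid = cp_reconstruct(u, v, w)     # dense field sampled on the voxel grid
print(grid.shape)                  # (32, 32, 32)
```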
1 code implementation • CVPR 2022 • Yudi Dai, Yitai Lin, Chenglu Wen, Siqi Shen, Lan Xu, Jingyi Yu, Yuexin Ma, Cheng Wang
We propose Human-centered 4D Scene Capture (HSC4D) to accurately and efficiently create a dynamic digital world, containing large-scale indoor-outdoor scenes, diverse human motions, and rich interactions between humans and environments.
no code implementations • 8 Mar 2022 • Ziyu Wang, Wei Yang, Junming Cao, Lan Xu, Junqing Yu, Jingyi Yu
We present a novel neural refractive field (NeReF) to recover the wavefront of transparent fluids by simultaneously estimating the surface position and normal of the fluid front.
no code implementations • CVPR 2022 • Yuheng Jiang, Suyi Jiang, Guoxing Sun, Zhuo Su, Kaiwen Guo, Minye Wu, Jingyi Yu, Lan Xu
In this paper, we propose NeuralHOFusion, a neural approach for volumetric human-object capture and rendering using sparse consumer RGBD sensors.
no code implementations • 22 Feb 2022 • Yingqian Wang, Longguang Wang, Gaochang Wu, Jungang Yang, Wei An, Jingyi Yu, Yulan Guo
In this paper, we propose a generic mechanism to disentangle these coupled information for LF image processing.
no code implementations • CVPR 2022 • Liao Wang, Jiakai Zhang, Xinhang Liu, Fuqiang Zhao, Yanshun Zhang, Yingliang Zhang, Minye Wu, Lan Xu, Jingyi Yu
In this paper, we present a novel Fourier PlenOctree (FPO) technique to tackle efficient neural modeling and real-time rendering of dynamic scenes captured under the free-view video (FVV) setting.
no code implementations • 12 Feb 2022 • Jiakai Zhang, Liao Wang, Xinhang Liu, Fuqiang Zhao, Minzhang Li, Haizhao Dai, Boyuan Zhang, Wei Yang, Lan Xu, Jingyi Yu
We further develop a hybrid neural-rasterization rendering framework to support consumer-level VR headsets so that the aforementioned volumetric video viewing and editing, for the first time, can be conducted immersively in virtual 3D space.
1 code implementation • 11 Feb 2022 • Haimin Luo, Teng Xu, Yuheng Jiang, Chenglin Zhou, Qiwei Qiu, Yingliang Zhang, Wei Yang, Lan Xu, Jingyi Yu
Our ARTEMIS enables interactive motion control, real-time animation, and photo-realistic rendering of furry animals.
no code implementations • 11 Feb 2022 • Longwen Zhang, Chuxiao Zeng, Qixuan Zhang, Hongyang Lin, Ruixiang Cao, Wei Yang, Lan Xu, Jingyi Yu
In this paper, we present a new learning-based, video-driven approach for generating dynamic facial geometries with high-quality physically-based assets.
no code implementations • 9 Feb 2022 • Yuwei Li, Longwen Zhang, Zesong Qiu, Yingwenqi Jiang, Nianyi Li, Yuexin Ma, Yuyao Zhang, Lan Xu, Jingyi Yu
Emerging Metaverse applications demand reliable, accurate, and photorealistic reproductions of human hands to perform sophisticated operations as if in the physical world.
no code implementations • 29 Dec 2021 • Zhengqing Pan, Ruiqian Li, Tian Gao, Zi Wang, Ping Liu, Siyuan Shen, Tao Wu, Jingyi Yu, Shiying Li
There has been an increasing interest in deploying non-line-of-sight (NLOS) imaging systems for recovering objects behind an obstacle.
no code implementations • CVPR 2022 • Fuqiang Zhao, Wei Yang, Jiakai Zhang, Pei Lin, Yingliang Zhang, Jingyi Yu, Lan Xu
The raw HumanNeRF can already produce reasonable rendering on sparse video inputs of unseen subjects and camera settings.
1 code implementation • 27 Oct 2021 • Qing Wu, Yuwei Li, Yawen Sun, Yan Zhou, Hongjiang Wei, Jingyi Yu, Yuyao Zhang
In the ArSSR model, the reconstruction of HR images with different up-scaling rates is defined as learning a continuous implicit voxel function from the observed LR images.
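Under that reading, arbitrary-scale reconstruction amounts to evaluating a learned coordinate-to-intensity function on a denser grid; a minimal sketch with a stand-in function `f` (all names and shapes assumed):

```python
# Querying an implicit volume at an arbitrary (even non-integer) up-scaling
# rate; `f` stands in for a trained coordinate-to-intensity network.
import numpy as np

def query_volume(f, scale, base_shape=(16, 16, 16)):
    """Evaluate an implicit volume f on a grid up-sampled by `scale`."""
    shape = tuple(int(s * scale) for s in base_shape)
    axes = [np.linspace(0.0, 1.0, n) for n in shape]
    grid = np.stack(np.meshgrid(*axes, indexing='ij'), axis=-1)  # (X, Y, Z, 3)
    return f(grid.reshape(-1, 3)).reshape(shape)

f = lambda p: np.sin(10 * p).prod(axis=-1)   # placeholder implicit function
hr = query_volume(f, scale=2.5)              # non-integer rate also works
print(hr.shape)                              # (40, 40, 40)
```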
no code implementations • 10 Sep 2021 • Tobias Jacobs, Jingyi Yu, Julia Gastinger, Timo Sztyler
We present a novel methodology to build powerful predictive process models.
no code implementations • 5 Sep 2021 • Yuqi Ding, Zhang Chen, Yu Ji, Jingyi Yu, Jinwei Ye
Recovering 3D geometry of underwater scenes is challenging because of non-linear refraction of light at the water-air interface caused by the camera housing.
no code implementations • 12 Aug 2021 • Liao Wang, Ziyu Wang, Pei Lin, Yuheng Jiang, Xin Suo, Minye Wu, Lan Xu, Jingyi Yu
To fill this gap, in this paper we propose a neural interactive bullet-time generator (iButter) for photo-realistic human free-viewpoint rendering from dense RGB streams, which enables flexible and interactive design for human bullet-time visual effects.
no code implementations • 1 Aug 2021 • Guoxing Sun, Xin Chen, Yizhang Chen, Anqi Pang, Pei Lin, Yuheng Jiang, Lan Xu, Jingya Wang, Jingyi Yu
In this paper, we propose a neural human performance capture and rendering system to generate both high-quality geometry and photo-realistic texture of both humans and objects under challenging interaction scenarios in arbitrary novel views, from only sparse RGB streams.
Dynamic Reconstruction • Human-Object Interaction Detection • +3
no code implementations • 30 Jul 2021 • Youjia Wang, Taotao Zhou, Minzhang Li, Teng Xu, Minye Wu, Lan Xu, Jingyi Yu
We present a neural relighting and expression transfer technique to transfer the facial expressions from a source performer to a portrait video of a target performer while enabling dynamic relighting.
no code implementations • 14 Jul 2021 • Anqi Pang, Xin Chen, Haimin Luo, Minye Wu, Jingyi Yu, Lan Xu
To fill this gap, in this paper we propose a few-shot neural human rendering approach (FNHR) from only sparse RGBD inputs, which exploits the temporal and spatial redundancy to generate photo-realistic free-view output of human activities.
no code implementations • 29 Jun 2021 • Qing Wu, Yuwei Li, Lan Xu, Ruiming Feng, Hongjiang Wei, Qing Yang, Boliang Yu, Xiaozhao Liu, Jingyi Yu, Yuyao Zhang
To reconstruct high-quality high-resolution (HR) MR images, we propose a novel image reconstruction network named IREM, which is trained on multiple low-resolution (LR) MR images and achieves an arbitrary up-sampling rate for HR image reconstruction.
1 code implementation • 21 Jun 2021 • Yuwei Li, Minye Wu, Yuyao Zhang, Lan Xu, Jingyi Yu
Hand modeling is critical for immersive VR/AR, action understanding, or human healthcare.
1 code implementation • 30 Apr 2021 • Jiakai Zhang, Xinhang Liu, Xinyi Ye, Fuqiang Zhao, Yanshun Zhang, Minye Wu, Yingliang Zhang, Lan Xu, Jingyi Yu
Such a layered representation supports full perception and realistic manipulation of the dynamic scene whilst still supporting free viewing over a wide range.
1 code implementation • 23 Apr 2021 • Xin Chen, Anqi Pang, Wei Yang, Yuexin Ma, Lan Xu, Jingyi Yu
In this paper, we propose SportsCap -- the first approach for simultaneously capturing 3D human motions and understanding fine-grained actions from monocular challenging sports video input.
no code implementations • 6 Apr 2021 • Ziyu Wang, Liao Wang, Fuqiang Zhao, Minye Wu, Lan Xu, Jingyi Yu
In this paper, we propose MirrorNeRF, a one-shot neural portrait free-viewpoint rendering approach using a catadioptric imaging system with multiple sphere mirrors and a single high-resolution digital camera. It is the first to combine the neural radiance field with catadioptric imaging, enabling one-shot photo-realistic human portrait reconstruction and rendering in a low-cost and casual capture setting.
1 code implementation • 5 Apr 2021 • Haimin Luo, Anpei Chen, Qixuan Zhang, Bai Pang, Minye Wu, Lan Xu, Jingyi Yu
In this paper, we propose a novel scheme to generate opacity radiance fields with a convolutional neural renderer for fuzzy objects. It is the first to combine explicit opacity supervision and a convolutional mechanism within the neural radiance field framework, enabling high-quality appearance and globally consistent alpha mattes in arbitrary novel views.
1 code implementation • ICCV 2021 • Longwen Zhang, Qixuan Zhang, Minye Wu, Jingyi Yu, Lan Xu
In this paper, we propose a neural approach for real-time, high-quality and coherent video portrait relighting, which jointly models the semantic, temporal and lighting consistency using a new dynamic OLAT dataset.
2 code implementations • ICCV 2021 • Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, Hao Su
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
1 code implementation • ICCV 2021 • Quan Meng, Anpei Chen, Haimin Luo, Minye Wu, Hao Su, Lan Xu, Xuming He, Jingyi Yu
We introduce GNeRF, a framework to marry Generative Adversarial Networks (GAN) with Neural Radiance Field (NeRF) reconstruction for complex scenarios with unknown and even randomly initialized camera poses.
1 code implementation • 2 Jan 2021 • Siyuan Shen, Zi Wang, Ping Liu, Zhengqing Pan, Ruiqian Li, Tian Gao, Shiying Li, Jingyi Yu
We present a neural modeling framework for Non-Line-of-Sight (NLOS) imaging.
no code implementations • 13 Aug 2020 • Quan Meng, Jiakai Zhang, Qiang Hu, Xuming He, Jingyi Yu
We present a novel real-time line segment detection scheme called Line Graph Neural Network (LGNN).
1 code implementation • 7 Jul 2020 • Anpei Chen, Ruiyang Liu, Ling Xie, Zhang Chen, Hao Su, Jingyi Yu
To address this issue, we propose a SofGAN image generator to decouple the latent space of portraits into two subspaces: a geometry space and a texture space.
1 code implementation • 27 May 2020 • Xin Chen, Yuwei Li, Xi Luo, Tianjia Shao, Jingyi Yu, Kun Zhou, Youyi Zheng
We base our work on the assumption that most human-made objects are constituted by parts and these parts can be well represented by generalized primitives.
1 code implementation • 20 Apr 2020 • Xiaoxu Li, Dongliang Chang, Zhanyu Ma, Zheng-Hua Tan, Jing-Hao Xue, Jie Cao, Jingyi Yu, Jun Guo
A deep neural network of multiple nonlinear layers forms a large function space, which can easily lead to overfitting when it encounters small-sample data.
1 code implementation • 17 Dec 2019 • Yingqian Wang, Longguang Wang, Jungang Yang, Wei An, Jingyi Yu, Yulan Guo
Specifically, spatial and angular features are first separately extracted from input LFs, and then repetitively interacted to progressively incorporate spatial and angular information.
2 code implementations • CVPR 2020 • Zhang Chen, Anpei Chen, Guli Zhang, Chengyuan Wang, Yu Ji, Kiriakos N. Kutulakos, Jingyi Yu
We present a novel Relightable Neural Renderer (RNR) for simultaneous view synthesis and relighting using multi-view image inputs.
1 code implementation • 31 Aug 2019 • Jing Jin, Junhui Hou, Jie Chen, Huanqiang Zeng, Sam Kwong, Jingyi Yu
Specifically, the coarse sub-aperture image (SAI) synthesis module first explores the scene geometry from an unstructured sparsely-sampled LF and leverages it to independently synthesize novel SAIs, in which a confidence-based blending strategy is proposed to fuse the information from different input SAIs, giving an intermediate densely-sampled LF.
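A minimal sketch of one plausible form of such confidence-based blending (our own simplification, not the paper's exact module): each input view contributes a warped estimate of the novel SAI plus a per-pixel confidence map, and the fusion is a normalized weighted average.

```python
# Confidence-weighted fusion of independently synthesized sub-aperture images.
import numpy as np

def blend_sais(estimates, confidences, eps=1e-8):
    """estimates, confidences: (K, H, W) -> fused (H, W) image."""
    w = confidences / (confidences.sum(axis=0, keepdims=True) + eps)
    return (w * estimates).sum(axis=0)

est = np.random.rand(4, 64, 64)     # 4 independently synthesized views
conf = np.random.rand(4, 64, 64)    # per-pixel confidence of each view
fused = blend_sais(est, conf)
```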
1 code implementation • 23 Jul 2019 • Jing Jin, Junhui Hou, Jie Chen, Sam Kwong, Jingyi Yu
To the best of our knowledge, this is the first end-to-end deep learning method for reconstructing a high-resolution LF image with a hybrid input.
1 code implementation • 30 May 2019 • Ziheng Zhang, Anpei Chen, Ling Xie, Jingyi Yu, Shenghua Gao
Specifically, we first introduce a new representation, namely a semantics-aware distance map (sem-dist map), to serve as our target for amodal segmentation instead of the commonly used masks and heatmaps.
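One plausible construction of such a distance-map target, sketched with scipy under our own assumptions (the paper's exact sem-dist definition, which also encodes semantics, may differ):

```python
# Signed distance map from a binary instance mask: positive inside the object
# (distance to the boundary), negative outside, giving a smoother regression
# target than a hard binary mask.
import numpy as np
from scipy.ndimage import distance_transform_edt

def sem_dist_map(mask):
    """mask: (H, W) binary instance mask -> signed distance map."""
    inside = distance_transform_edt(mask)        # distance to boundary, inside
    outside = distance_transform_edt(1 - mask)   # distance to mask, outside
    return inside - outside

mask = np.zeros((32, 32), dtype=np.uint8)
mask[8:24, 8:24] = 1
print(sem_dist_map(mask)[16, 16])   # large positive value at the object center
```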
no code implementations • 15 Apr 2019 • Zhong Li, Jinwei Ye, Yu Ji, Hao Sheng, Jingyi Yu
Particle Imaging Velocimetry (PIV) estimates the flow of fluid by analyzing the motion of injected particles.
no code implementations • 9 Apr 2019 • Mingyuan Zhou, Yu Ji, Yuqi Ding, Jinwei Ye, S. Susan Young, Jingyi Yu
In this paper, we introduce a novel concentric multi-spectral light field (CMSLF) design that is able to recover the shape and reflectance of surfaces with arbitrary material in one shot.
no code implementations • 4 Apr 2019 • Zhang Chen, Yu Ji, Mingyuan Zhou, Sing Bing Kang, Jingyi Yu
We avoid the need for spatial constancy of albedo; instead, we use a new measure for albedo similarity that is based on the albedo norm profile.
1 code implementation • 4 Apr 2019 • Xin Chen, Anqi Pang, Wei Yang, Lan Xu, Jingyi Yu
In this paper, we present TightCap, a data-driven scheme to capture both the human shape and dressed garments accurately with only a single 3D human scan, which enables numerous applications such as virtual try-on, biometrics and body evaluation.
no code implementations • 4 Apr 2019 • Minye Wu, Haibin Ling, Ning Bi, Shenghua Gao, Hao Sheng, Jingyi Yu
A natural solution to these challenges is to use multiple cameras with multiview inputs, though existing systems are mostly limited to specific targets (e.g., human), static cameras, and/or camera calibration.
1 code implementation • ICCV 2019 • Anpei Chen, Zhang Chen, Guli Zhang, Ziheng Zhang, Kenny Mitchell, Jingyi Yu
Our technique employs expression analysis for proxy face geometry generation and combines supervised and unsupervised learning for facial detail synthesis.
no code implementations • 7 Mar 2019 • Yuanxi Ma, Cen Wang, Shiying Li, Jingyi Yu
Robust segmentation of hair from portrait images remains challenging: hair does not conform to a uniform shape, style or even color; dark hair in particular lacks features.
no code implementations • 15 Oct 2018 • Anpei Chen, Minye Wu, Yingliang Zhang, Nianyi Li, Jie Lu, Shenghua Gao, Jingyi Yu
A surface light field represents the radiance of rays originating from any point on the surface in any direction.
no code implementations • CVPR 2018 • Zhong Li, Minye Wu, Wangyiteng Zhou, Jingyi Yu
The availability of affordable 3D full body reconstruction systems has given rise to free-viewpoint video (FVV) of human shapes.
no code implementations • ECCV 2018 • Ziheng Zhang, Yanyu Xu, Jingyi Yu, Shenghua Gao
Considering that 360° videos are usually stored in equirectangular panorama format, we propose to implement the spherical convolution on the panorama by stretching and rotating the kernel based on the location of the patch to be convolved.
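A minimal sketch of that latitude-dependent stretching, under the simplifying assumption that the horizontal sampling step is widened by 1/cos(latitude) to keep the kernel's spherical footprint roughly constant (the paper's exact transform may differ):

```python
# Kernel sampling offsets for an equirectangular panorama: one degree of
# longitude shrinks by cos(latitude), so the horizontal step is stretched
# by 1/cos(latitude) near the poles.
import numpy as np

def kernel_offsets(row, height, ksize=3):
    """Per-row (dy, dx) pixel offsets for a kernel centered at `row`."""
    lat = (0.5 - (row + 0.5) / height) * np.pi   # latitude in (-pi/2, pi/2)
    half = ksize // 2
    base = np.arange(-half, half + 1, dtype=float)
    dx = base / max(np.cos(lat), 1e-3)           # stretch near the poles
    dy = base                                    # vertical step unchanged
    return np.stack(np.meshgrid(dy, dx, indexing='ij'), axis=-1)  # (k, k, 2)

print(kernel_offsets(row=10, height=256))   # wide offsets near the top pole
```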
no code implementations • ECCV 2018 • Shi Jin, Ruiyang Liu, Yu Ji, Jinwei Ye, Jingyi Yu
The bullet-time effect, presented in the feature film "The Matrix", has been widely adopted in feature films and TV commercials to create an amazing stopping-time illusion.
no code implementations • 7 Aug 2018 • Qi Zhang, Chunping Zhang, Jinbo Ling, Qing Wang, Jingyi Yu
Based on the MPC model and projective transformation, we propose a calibration algorithm to verify our light field camera model.
no code implementations • CVPR 2018 • Can Chen, Scott McCloskey, Jingyi Yu
With the rise of misinformation spread via social media channels, enabled by the increasing automation and realism of image manipulation tools, image forensics is an increasingly relevant problem.
no code implementations • CVPR 2018 • Yang Yang, Shi Jin, Ruiyang Liu, Sing Bing Kang, Jingyi Yu
The recovered layout is then used to guide shape estimation of the remaining objects using their normal information.
no code implementations • CVPR 2018 • Yanyu Xu, Yanbing Dong, Junru Wu, Zhengzhong Sun, Zhiru Shi, Jingyi Yu, Shenghua Gao
This paper explores gaze prediction in dynamic $360^\circ$ immersive videos, i.e., based on the history scan path and VR contents, we predict where a viewer will look at an upcoming time.
no code implementations • 26 Mar 2018 • Huangjie Yu, Guli Zhang, Yuanxi Ma, Yingliang Zhang, Jingyi Yu
We present a novel semantic light field (LF) refocusing technique that can achieve unprecedented see-through quality.
no code implementations • 31 Jan 2018 • Zhong Li, Yu Ji, Wei Yang, Jinwei Ye, Jingyi Yu
In multi-view human body capture systems, the recovered 3D geometry or even the acquired imagery data can be heavily corrupted due to occlusions, noise, limited field-of-view, etc.
no code implementations • CVPR 2018 • Xuan Cao, Zhang Chen, Anpei Chen, Xin Chen, Cen Wang, Jingyi Yu
We present a novel 3D face reconstruction technique that leverages sparse photometric stereo (PS) and the latest advances in face registration/modeling from a single image.
no code implementations • 29 Nov 2017 • Xinqing Guo, Zhang Chen, Siyuan Li, Yang Yang, Jingyi Yu
We then construct three individual networks: a Focus-Net to extract depth from a single focal stack, an EDoF-Net to obtain the extended depth of field (EDoF) image from the focal stack, and a Stereo-Net to conduct stereo matching.
1 code implementation • 17 Oct 2017 • Li Yi, Lin Shao, Manolis Savva, Haibin Huang, Yang Zhou, Qirui Wang, Benjamin Graham, Martin Engelcke, Roman Klokov, Victor Lempitsky, Yuan Gan, Pengyu Wang, Kun Liu, Fenggen Yu, Panpan Shui, Bingyang Hu, Yan Zhang, Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Minki Jeong, Jaehoon Choi, Changick Kim, Angom Geetchandra, Narasimha Murthy, Bhargava Ramu, Bharadwaj Manda, M. Ramanathan, Gautam Kumar, P Preetham, Siddharth Srivastava, Swati Bhugra, Brejesh Lall, Christian Haene, Shubham Tulsiani, Jitendra Malik, Jared Lafer, Ramsey Jones, Siyuan Li, Jie Lu, Shi Jin, Jingyi Yu, Qi-Xing Huang, Evangelos Kalogerakis, Silvio Savarese, Pat Hanrahan, Thomas Funkhouser, Hao Su, Leonidas Guibas
We introduce a large-scale 3D shape understanding benchmark using data and annotation from ShapeNet 3D object database.
1 code implementation • 9 Oct 2017 • Yanyu Xu, Shenghua Gao, Junru Wu, Nianyi Li, Jingyi Yu
Specifically, we propose to decompose a personalized saliency map (referred to as PSM) into a universal saliency map (referred to as USM) predictable by existing saliency detection models and a new discrepancy map across users that characterizes personalized saliency.
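The decomposition reads as PSM = USM + discrepancy; in code, with toy arrays (illustrative only):

```python
# A user's personalized saliency map modeled as a universal map shared by
# everyone plus a user-specific discrepancy map (toy values).
import numpy as np

usm = np.random.rand(64, 64)                 # universal saliency map
discrepancy = 0.1 * np.random.randn(64, 64)  # per-user discrepancy map
psm = np.clip(usm + discrepancy, 0.0, 1.0)   # personalized saliency map
```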
no code implementations • ICCV 2017 • Yujia Xue, Kang Zhu, Qiang Fu, Xilin Chen, Jingyi Yu
In this paper, we present a single camera hyperspectral light field imaging solution that we call Snapshot Plenoptic Imager (SPI).
no code implementations • ICCV 2017 • Yingliang Zhang, Peihong Yu, Wei Yang, Yuanxi Ma, Jingyi Yu
In this paper, we explore using light fields captured by plenoptic cameras or camera arrays as inputs.
no code implementations • 4 Sep 2017 • Kang Zhu, Yujia Xue, Qiang Fu, Sing Bing Kang, Xilin Chen, Jingyi Yu
There are two parts to extracting scene depth.
no code implementations • 2 Aug 2017 • Zhang Chen, Xinqing Guo, Siyuan Li, Xuan Cao, Jingyi Yu
Depth from defocus (DfD) and stereo matching are two of the most studied passive depth sensing schemes.
no code implementations • CVPR 2017 • Can Chen, Scott McCloskey, Jingyi Yu
Recent advances on image manipulation techniques have made image forgery detection increasingly more challenging.
no code implementations • 28 Mar 2017 • Wei Liu, Xiaogang Chen, Chunhua Shen, Jingyi Yu, Qiang Wu, Jie Yang
In this paper, we propose a general framework for Robust Guided Image Filtering (RGIF), which contains a data term and a smoothness term, to solve the two issues mentioned above.
no code implementations • 15 Aug 2016 • Hao Zhu, Qing Wang, Jingyi Yu
Occlusion is one of the most challenging problems in depth estimation.
no code implementations • CVPR 2016 • Nianyi Li, Haiting Lin, Bilin Sun, Mingyuan Zhou, Jingyi Yu
In this paper, we present a novel LF sampling scheme by exploiting a special non-centric camera called the crossed-slit or XSlit camera.
no code implementations • ICCV 2015 • Haiting Lin, Can Chen, Sing Bing Kang, Jingyi Yu
The other is a data consistency measure based on analysis-by-synthesis, i.e., the difference between the synthesized focal stack given the hypothesized depth map and that from the LF.
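A minimal sketch of such an analysis-by-synthesis cost, assuming simple shift-and-add refocusing and a constant disparity hypothesis (our own simplification; the paper operates on full depth maps):

```python
# Refocus a light field by shifting each sub-aperture image proportionally to
# its angular offset, then score a disparity hypothesis by the difference
# between the synthetic slice and the measured focal-stack slice.
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(lf, disparity):
    """lf: (U, V, H, W) sub-aperture images -> refocused (H, W) slice."""
    U, V, _, _ = lf.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    acc = np.zeros(lf.shape[2:])
    for u in range(U):
        for v in range(V):
            acc += nd_shift(lf[u, v], (disparity * (u - cu), disparity * (v - cv)))
    return acc / (U * V)

def consistency_cost(lf, measured_slice, disparity):
    return np.abs(refocus(lf, disparity) - measured_slice).mean()

lf = np.random.rand(5, 5, 32, 32)                 # toy light field
print(consistency_cost(lf, lf.mean((0, 1)), 0.0)) # ~0 for a correct hypothesis
```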
no code implementations • 19 Jun 2015 • Xuehui Wang, Jinli Suo, Jingyi Yu, Yongdong Zhang, Qionghai Dai
Firstly, we capture the scene with a pinhole and analyze the scene content to determine primary edge orientations.
no code implementations • 17 Jun 2015 • Wei Liu, Yijun Li, Xiaogang Chen, Jie Yang, Qiang Wu, Jingyi Yu
A popular solution is upsampling the obtained noisy low resolution depth map with the guidance of the companion high resolution color image.
no code implementations • 15 Jun 2015 • Qiaosong Wang, Haiting Lin, Yi Ma, Sing Bing Kang, Jingyi Yu
We propose a novel approach that jointly removes reflection or translucent layer from a scene and estimates scene depth.
no code implementations • ICCV 2015 • Wei Yang, Haiting Lin, Sing Bing Kang, Jingyi Yu
We first conduct a comprehensive analysis to characterize DDAR, infer object depth from its AR, and model recoverable depth range, sensitivity, and error.
no code implementations • CVPR 2015 • Nianyi Li, Bilin Sun, Jingyi Yu
In this paper, we present a unified saliency detection framework for handling heterogeneous types of input data.
no code implementations • CVPR 2015 • Wei Yang, Yu Ji, Haiting Lin, Yang Yang, Sing Bing Kang, Jingyi Yu
This enables a sparsity-prior based solution for iteratively recovering the surface normal, the surface albedo, and the visibility function from a small number of images.
no code implementations • CVPR 2014 • Can Chen, Haiting Lin, Zhan Yu, Sing Bing Kang, Jingyi Yu
Our bilateral consistency metric is used to indicate the probability of occlusions by analyzing the SCams.
no code implementations • CVPR 2014 • Nianyi Li, Jinwei Ye, Yu Ji, Haibin Ling, Jingyi Yu
Existing saliency detection approaches use images as inputs and are sensitive to foreground/background similarities, complex background textures, and occlusions.
no code implementations • CVPR 2014 • Erkang Cheng, Yu Pang, Ying Zhu, Jingyi Yu, Haibin Ling
Robust tracking of deformable objects like catheters or vascular structures in X-ray images is an important technique used in image-guided medical interventions for effective motion compensation and dynamic multi-modality image fusion.
no code implementations • CVPR 2014 • Zhaolin Xiao, Qing Wang, Guoqing Zhou, Jingyi Yu
When using plenoptic camera for digital refocusing, angular undersampling can cause severe (angular) aliasing artifacts.
no code implementations • CVPR 2014 • Yu Ji, Jinwei Ye, Sing Bing Kang, Jingyi Yu
In particular, we show that linear tone mapping eliminates ringing but incurs severe contrast loss, while non-linear tone mapping functions such as Gamma curves slightly enhance contrast but introduce ringing.
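The two ends of that trade-off are easy to see numerically; a toy comparison of the curves themselves (the ringing behavior comes from the surrounding reconstruction, not from these one-liners):

```python
# Linear tone mapping is a pure rescale; a gamma curve lifts mid-tones,
# giving the slight contrast boost mentioned above (toy values).
import numpy as np

hdr = np.linspace(0.0, 4.0, 9)            # toy HDR radiance values
linear = hdr / hdr.max()                  # linear tone mapping: rescale only
gamma = (hdr / hdr.max()) ** (1.0 / 2.2)  # non-linear gamma curve
print(np.round(linear, 2))
print(np.round(gamma, 2))                 # mid-tones lifted relative to linear
```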
no code implementations • CVPR 2013 • Yu Ji, Jinwei Ye, Jingyi Yu
By observing the LF-probe through the gas flow, we acquire a dense set of ray-ray correspondences and then reconstruct their light paths.
no code implementations • CVPR 2013 • Xiaogang Chen, Sing Bing Kang, Jie Yang, Jingyi Yu
PatchGPs treat image patches as nodes and patch differences as edge weights for computing the shortest (geodesic) paths.
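A minimal sketch of that graph construction, assuming a 4-connected grid of patches and Euclidean patch differences (our own choices; the paper's edge weights may differ), using scipy's Dijkstra:

```python
# Build a grid graph whose nodes are image patches and whose edge weights are
# patch differences, then compute geodesic (shortest-path) distances.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def patch_geodesics(patches, rows, cols, source=0):
    """patches: (rows*cols, P) flattened patches laid out on a grid."""
    n = rows * cols
    g = lil_matrix((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((0, 1), (1, 0)):      # right/down neighbors
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    j = rr * cols + cc
                    w = np.linalg.norm(patches[i] - patches[j])  # patch difference
                    g[i, j] = g[j, i] = w
    return dijkstra(g.tocsr(), directed=False, indices=source)

patches = np.random.rand(6 * 8, 25)      # 6x8 grid of flattened 5x5 patches
dist = patch_geodesics(patches, rows=6, cols=8)
print(dist.shape)                        # geodesic distance to every patch
```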
no code implementations • CVPR 2013 • Jinwei Ye, Yu Ji, Jingyi Yu
Specifically, we prove that parallel 3D lines map to 2D curves in an XSlit image and they converge at an XSlit Vanishing Point (XVP).