Search Results for author: Jingyi Yu

Found 141 papers, 41 papers with code

3D Fluid Flow Reconstruction Using Compact Light Field PIV

no code implementations ECCV 2020 Zhong Li, Yu Ji, Jingyi Yu, Jinwei Ye

In this paper, we present a PIV solution that uses a compact lenslet-based light field camera to track dense particles floating in the fluid and reconstruct the 3D fluid flow.

Optical Flow Estimation

Zero-Shot Image Denoising for High-Resolution Electron Microscopy

1 code implementation 20 Jun 2024 Xuanyu Tian, Zhuoya Dong, Xiyue Lin, Yue Gao, Hongjiang Wei, Yanhang Ma, Jingyi Yu, Yuyao Zhang

Noise2SR trains the network with paired noisy images of different resolutions, which is achieved via a super-resolution (SR) strategy.

Data Augmentation, Image Denoising +2

MeshXL: Neural Coordinate Field for Generative 3D Foundation Models

1 code implementation 31 May 2024 Sijin Chen, Xin Chen, Anqi Pang, Xianfang Zeng, Wei Cheng, Yijun Fu, Fukun Yin, Yanru Wang, Zhibin Wang, Chi Zhang, Jingyi Yu, Gang Yu, Bin Fu, Tao Chen

The polygon mesh representation of 3D data offers great flexibility, fast rendering, and storage efficiency, making it widely preferred in various applications.

Language Modelling, Large Language Model

CLAY: A Controllable Large-scale Generative Model for Creating High-quality 3D Assets

no code implementations 30 May 2024 Longwen Zhang, Ziyu Wang, Qixuan Zhang, Qiwei Qiu, Anqi Pang, Haoran Jiang, Wei Yang, Lan Xu, Jingyi Yu

To narrow this disparity, we introduce CLAY, a 3D geometry and material generator designed to effortlessly transform human imagination into intricate 3D digital structures.


Unsupervised Density Neural Representation for CT Metal Artifact Reduction

no code implementations 11 May 2024 Qing Wu, Xu Guo, Lixuan Chen, Dongming He, Hongjiang Wei, Xudong Wang, S. Kevin Zhou, Yifeng Zhang, Jingyi Yu, Yuyao Zhang

Specifically, we decompose the energy-dependent LACs into energy-independent densities and energy-dependent mass attenuation coefficients (MACs) by fully considering the physical model of X-ray absorption.

Image Inpainting, Metal Artifact Reduction
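The decomposition this abstract describes can be illustrated in a few lines of NumPy: the energy-dependent linear attenuation coefficient (LAC) factors into an energy-independent density and an energy-dependent mass attenuation coefficient (MAC). All values below are made up for illustration, not taken from the paper.

```python
import numpy as np

# Sketch of the physical decomposition described above:
# mu(E, x) = rho(x) * m(E), with rho energy-independent and m(E) per-energy.
rho = np.array([1.0, 1.05, 7.9])    # voxel densities in g/cm^3 (e.g. water, tissue, metal)
mac = np.array([0.38, 0.21, 0.15])  # MAC in cm^2/g at three X-ray energies (illustrative)

# Broadcasting the outer product gives the full energy-by-voxel LAC table.
lac = mac[:, None] * rho[None, :]   # shape (n_energies, n_voxels)
print(lac.shape)                    # (3, 3)
```

Because the density factor is shared across energies, fitting it once removes the energy dependence that causes metal artifacts in a monochromatic model.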

DPER: Diffusion Prior Driven Neural Representation for Limited Angle and Sparse View CT Reconstruction

no code implementations 27 Apr 2024 Chenhe Du, Xiyue Lin, Qing Wu, Xuanyu Tian, Ying Su, Zhe Luo, Hongjiang Wei, S. Kevin Zhou, Jingyi Yu, Yuyao Zhang

However, the unsupervised nature of INR architecture imposes limited constraints on the solution space, particularly for the highly ill-posed reconstruction task posed by LACT and ultra-SVCT.

LetsGo: Large-Scale Garage Modeling and Rendering via LiDAR-Assisted Gaussian Primitives

no code implementations 15 Apr 2024 Jiadi Cui, Junming Cao, Fuqiang Zhao, Zhipeng He, Yifan Chen, Yuhui Zhong, Lan Xu, Yujiao Shi, Yingliang Zhang, Jingyi Yu

Large garages are ubiquitous yet intricate scenes that present unique challenges due to their monotonous colors, repetitive patterns, reflective surfaces, and transparent vehicle glass.

3D Reconstruction, Camera Pose Estimation +1

A Unified Diffusion Framework for Scene-aware Human Motion Estimation from Sparse Signals

1 code implementation CVPR 2024 Jiangnan Tang, Jingya Wang, Kaiyang Ji, Lan Xu, Jingyi Yu, Ye Shi

One of the biggest challenges in this task is the one-to-many mapping from sparse observations to dense full-body motions, which introduces inherent ambiguities.

Motion Estimation

Gaze-guided Hand-Object Interaction Synthesis: Benchmark and Method

no code implementations 24 Mar 2024 Jie Tian, Lingxiao Yang, Ran Ji, Yuexin Ma, Lan Xu, Jingyi Yu, Ye Shi, Jingya Wang

Here, the object motion diffusion model generates sequences of object motions based on gaze conditions, while the hand motion diffusion model produces hand motions based on the generated object motion.

Denoising, Human motion prediction +2

THOR: Text to Human-Object Interaction Diffusion via Relation Intervention

no code implementations 17 Mar 2024 Qianyang Wu, Ye Shi, Xiaoshui Huang, Jingyi Yu, Lan Xu, Jingya Wang

This paper presents new methodologies for the challenging task of generating dynamic human-object interactions from textual descriptions (Text2HOI).

Diversity, Human-Object Interaction Detection +2

LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation in Free Environment

no code implementations CVPR 2024 Yiming Ren, Xiao Han, Chengfeng Zhao, Jingya Wang, Lan Xu, Jingyi Yu, Yuexin Ma

For human-centric large-scale scenes, fine-grained modeling for 3D human global pose and shape is significant for scene understanding and can benefit many real-world applications.

Scene Understanding

RealDex: Towards Human-like Grasping for Robotic Dexterous Hand

no code implementations 21 Feb 2024 Yumeng Liu, Yaxun Yang, Youzhuo Wang, Xiaofei Wu, Jiamin Wang, Yichen Yao, Sören Schwertfeger, Sibei Yang, Wenping Wang, Jingyi Yu, Xuming He, Yuexin Ma

In this paper, we introduce RealDex, a pioneering dataset capturing authentic dexterous hand grasping motions infused with human behavioral patterns, enriched by multi-view and multimodal visual data.

Guidance with Spherical Gaussian Constraint for Conditional Diffusion

1 code implementation 5 Feb 2024 Lingxiao Yang, Shutong Ding, Yifan Cai, Jingyi Yu, Jingya Wang, Ye Shi

We theoretically show the existence of manifold deviation by establishing a lower bound on the estimation error of the loss guidance.


IMUSE: IMU-based Facial Expression Capture

no code implementations 3 Feb 2024 Youjia Wang, Yiwen Wu, Hengan Zhou, Hongyang Lin, Xingyue Peng, Yingwenqi Jiang, Yingsheng Zhu, Guanpeng Long, Yatu Zhang, Jingya Wang, Lan Xu, Jingyi Yu

In this paper, we propose IMUSE to fill this gap: a novel approach to facial expression capture using purely IMU signals, a significant departure from previous visual solutions. The key design in our IMUSE is a trilogy.


DressCode: Autoregressively Sewing and Generating Garments from Text Guidance

no code implementations 29 Jan 2024 Kai He, Kaixin Yao, Qixuan Zhang, Jingyi Yu, Lingjie Liu, Lan Xu

We first introduce SewingGPT, a GPT-based architecture integrating cross-attention with text-conditioned embedding to generate sewing patterns with text guidance.

Language Modelling, Large Language Model +2

OMG: Towards Open-vocabulary Motion Generation via Mixture of Controllers

no code implementations CVPR 2024 Han Liang, Jiacheng Bao, Ruichi Zhang, Sihan Ren, Yuecheng Xu, Sibei Yang, Xin Chen, Jingyi Yu, Lan Xu

At the subsequent fine-tuning stage, we introduce motion ControlNet, which incorporates text prompts as conditioning information, through a trainable copy of the pre-trained model and the proposed novel Mixture-of-Controllers (MoC) block.

BOTH2Hands: Inferring 3D Hands from Both Text Prompts and Body Dynamics

1 code implementation CVPR 2024 Wenqian Zhang, Molin Huang, Yuxuan Zhou, Juze Zhang, Jingyi Yu, Jingya Wang, Lan Xu

We further provide a strong baseline method, BOTH2Hands, for the novel task: generating vivid two-hand motions from both implicit body dynamics and explicit text prompts.

Motion Synthesis

I'M HOI: Inertia-aware Monocular Capture of 3D Human-Object Interactions

no code implementations CVPR 2024 Chengfeng Zhao, Juze Zhang, Jiashen Du, Ziwei Shan, Junye Wang, Jingyi Yu, Jingya Wang, Lan Xu

In this paper, we present I'm-HOI, a monocular scheme to faithfully capture the 3D motions of both the human and the object in a novel setting: using a minimal setup of an RGB camera and an object-mounted Inertial Measurement Unit (IMU).

Human-Object Interaction Detection, Object +1

HandDiffuse: Generative Controllers for Two-Hand Interactions via Diffusion Models

no code implementations 8 Dec 2023 Pei Lin, Sihang Xu, Hongdi Yang, Yiran Liu, Xin Chen, Jingya Wang, Jingyi Yu, Lan Xu

We further present a strong baseline method HandDiffuse for the controllable motion generation of interacting hands using various controllers.

Data Augmentation, Temporal Sequences

HiFi4G: High-Fidelity Human Performance Rendering via Compact Gaussian Splatting

no code implementations CVPR 2024 Yuheng Jiang, Zhehao Shen, Penghao Wang, Zhuo Su, Yu Hong, Yingliang Zhang, Jingyi Yu, Lan Xu

Then, we utilize a 4D Gaussian optimization scheme with adaptive spatial-temporal regularizers to effectively balance the non-rigid prior and Gaussian updating.

GenEM: Physics-Informed Generative Cryo-Electron Microscopy

no code implementations 4 Dec 2023 Jiakai Zhang, Qihe Chen, Yan Zeng, Wenyuan Gao, Xuming He, Zhijie Liu, Jingyi Yu

To address this, we introduce physics-informed generative cryo-electron microscopy (GenEM), which for the first time integrates physics-based cryo-EM simulation with generative unpaired noise translation to produce physically correct synthetic cryo-EM datasets with realistic noise.

Contrastive Learning, Pose Estimation +1

VideoRF: Rendering Dynamic Radiance Fields as 2D Feature Video Streams

no code implementations CVPR 2024 Liao Wang, Kaixin Yao, Chengcheng Guo, Zhirui Zhang, Qiang Hu, Jingyi Yu, Lan Xu, Minye Wu

In this paper, we introduce VideoRF, the first approach to enable real-time streaming and rendering of dynamic radiance fields on mobile platforms.

ScalableMap: Scalable Map Learning for Online Long-Range Vectorized HD Map Construction

1 code implementation 20 Oct 2023 Jingyi Yu, Zizhao Zhang, Shengfu Xia, Jizhang Sang

We extract more accurate bird's eye view (BEV) features guided by their linear structure, then propose a hierarchical sparse map representation to further leverage the scalability of vectorized map elements, and design a progressive decoding mechanism and a supervision strategy based on this representation.

3D Lane Detection, object-detection +1

Neural Impostor: Editing Neural Radiance Fields with Explicit Shape Manipulation

no code implementations 9 Oct 2023 Ruiyang Liu, Jinxu Xiang, Bowen Zhao, Ran Zhang, Jingyi Yu, Changxi Zheng

To tackle the problem of efficiently editing neural implicit fields, we introduce Neural Impostor, a hybrid representation incorporating an explicit tetrahedral mesh alongside a multigrid implicit field designated for each tetrahedron within the explicit mesh.

NeuRBF: A Neural Fields Representation with Adaptive Radial Basis Functions

1 code implementation ICCV 2023 Zhang Chen, Zhong Li, Liangchen Song, Lele Chen, Jingyi Yu, Junsong Yuan, Yi Xu

The spatial positions of their neural features are fixed on grid nodes and cannot adapt well to target signals.

Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator

2 code implementations NeurIPS 2023 Hanzhuo Huang, Yufan Feng, Cheng Shi, Lan Xu, Jingyi Yu, Sibei Yang

Text-to-video is a rapidly growing research area that aims to generate a semantically faithful, identity-preserving, and temporally coherent sequence of frames that accurately aligns with the input text prompt.

Text-to-Video Generation, Video Generation +1

CoTDet: Affordance Knowledge Prompting for Task Driven Object Detection

no code implementations ICCV 2023 Jiajin Tang, Ge Zheng, Jingyi Yu, Sibei Yang

The challenge lies in the object categories available for the task being too diverse to be covered by the closed object vocabulary of traditional object detection.

Object, object-detection +2

Human-centric Scene Understanding for 3D Large-scale Scenarios

1 code implementation ICCV 2023 Yiteng Xu, Peishan Cong, Yichen Yao, Runnan Chen, Yuenan Hou, Xinge Zhu, Xuming He, Jingyi Yu, Yuexin Ma

Human-centric scene understanding is significant for real-world applications, but it is extremely challenging due to the existence of diverse human poses and actions, complex human-environment interactions, severe occlusions in crowds, etc.

Action Recognition, Scene Understanding +1

Unsupervised Polychromatic Neural Representation for CT Metal Artifact Reduction

1 code implementation NeurIPS 2023 Qing Wu, Lixuan Chen, Ce Wang, Hongjiang Wei, S. Kevin Zhou, Jingyi Yu, Yuyao Zhang

In this work, we present a novel Polychromatic neural representation (Polyner) to tackle the challenging problem of CT imaging when metallic implants exist within the human body.

Metal Artifact Reduction

MotionGPT: Human Motion as a Foreign Language

2 code implementations NeurIPS 2023 Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, Tao Chen

Building upon this "motion vocabulary", we perform language modeling on both motion and text in a unified manner, treating human motion as a specific language.

Language Modelling, Motion Captioning +2
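The "motion vocabulary" idea can be sketched in toy form: continuous pose features are quantized against a small codebook and the resulting tokens are treated like words alongside text. The nearest-centroid quantizer, codebook values, and token names below are stand-ins for MotionGPT's learned VQ codebook, not the paper's implementation.

```python
import numpy as np

# Three "motion words" (codebook centroids) and three pose-feature frames.
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
frames = np.array([[0.1, -0.1], [0.9, 0.2], [0.2, 0.8]])

# Quantize each frame to its nearest centroid -> a discrete motion token.
tokens = [int(np.argmin(((codebook - f) ** 2).sum(1))) for f in frames]

# Motion tokens can then be interleaved with text tokens for unified language modeling.
sequence = ["<text>", "a", "person", "walks", "<motion>"] + [f"m{t}" for t in tokens]
print(sequence)
```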

Omni-Line-of-Sight Imaging for Holistic Shape Reconstruction

no code implementations 21 Apr 2023 Binbin Huang, Xingyue Peng, Siyuan Shen, Suan Xia, Ruiqian Li, Yanhua Yu, Yuehan Wang, Shenghua Gao, Wenzheng Chen, Shiying Li, Jingyi Yu

The core of our method is to place the object near diffuse walls and augment the LOS scan in the front view with NLOS scans from the surrounding walls, which serve as virtual "mirrors" to trap light toward the object.


InterGen: Diffusion-based Multi-human Motion Generation under Complex Interactions

1 code implementation 12 Apr 2023 Han Liang, Wenqian Zhang, Wenxuan Li, Jingyi Yu, Lan Xu

Then, we propose a novel representation for motion input in our interaction diffusion model, which explicitly formulates the global relations between the two performers in the world frame.

Denoising, Motion Synthesis

Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos

no code implementations CVPR 2023 Liao Wang, Qiang Hu, Qihan He, Ziyu Wang, Jingyi Yu, Tinne Tuytelaars, Lan Xu, Minye Wu

The success of Neural Radiance Fields (NeRFs) for modeling and free-view rendering of static objects has inspired numerous attempts on dynamic scenes.

Decoder, Neural Rendering

NeMF: Inverse Volume Rendering with Neural Microflake Field

no code implementations ICCV 2023 Youjia Zhang, Teng Xu, Junqing Yu, Yuteng Ye, Junle Wang, Yanqing Jing, Jingyi Yu, Wei Yang

Recovering the physical attributes of an object's appearance from its images captured under an unknown illumination is challenging yet essential for photo-realistic rendering.

CryoFormer: Continuous Heterogeneous Cryo-EM Reconstruction using Transformer-based Neural Representations

no code implementations 28 Mar 2023 Xinhang Liu, Yan Zeng, Yifan Qin, Hao Li, Jiakai Zhang, Lan Xu, Jingyi Yu

Cryo-electron microscopy (cryo-EM) allows for the high-resolution reconstruction of 3D structures of proteins and other biomolecules.


NEPHELE: A Neural Platform for Highly Realistic Cloud Radiance Rendering

no code implementations 7 Mar 2023 Haimin Luo, Siyuan Zhang, Fuqiang Zhao, Haotian Jing, Penghao Wang, Zhenxiao Yu, Dongxue Yan, Junran Ding, Boyuan Zhang, Qiang Hu, Shu Yin, Lan Xu, Jingyi Yu

Using such a cloud platform compatible with neural rendering, we further showcase the capabilities of our cloud radiance rendering through a series of applications, including cloud VR/AR rendering.

Neural Rendering

IKOL: Inverse kinematics optimization layer for 3D human pose and shape estimation via Gauss-Newton differentiation

1 code implementation 2 Feb 2023 Juze Zhang, Ye Shi, Yuexin Ma, Lan Xu, Jingyi Yu, Jingya Wang

This paper presents an inverse kinematic optimization layer (IKOL) for 3D human pose and shape estimation that leverages the strength of both optimization- and regression-based methods within an end-to-end framework.

3D human pose and shape estimation, regression

Relightable Neural Human Assets from Multi-view Gradient Illuminations

no code implementations CVPR 2023 Taotao Zhou, Kai He, Di Wu, Teng Xu, Qixuan Zhang, Kuixiang Shao, Wenzheng Chen, Lan Xu, Jingyi Yu

UltraStage will be publicly available to the community to stimulate significant future developments in various human modeling and rendering tasks.

Image Relighting, Novel View Synthesis

NeuralDome: A Neural Modeling Pipeline on Multi-View Human-Object Interactions

no code implementations CVPR 2023 Juze Zhang, Haimin Luo, Hongdi Yang, Xinru Xu, Qianyang Wu, Ye Shi, Jingyi Yu, Lan Xu, Jingya Wang

We construct a dense multi-view dome to acquire a complex human-object interaction dataset, named HODome, that consists of ~75M frames of 10 subjects interacting with 23 objects.

Human-Object Interaction Detection

Executing your Commands via Motion Diffusion in Latent Space

1 code implementation CVPR 2023 Xin Chen, Biao Jiang, Wen Liu, Zilong Huang, Bin Fu, Tao Chen, Jingyi Yu, Gang Yu

We study a challenging task, conditional human motion generation, which produces plausible human motion sequences according to various conditional inputs, such as action classes or textual descriptors.

Motion Synthesis

Weakly Supervised 3D Multi-person Pose Estimation for Large-scale Scenes based on Monocular Camera and Single LiDAR

no code implementations 30 Nov 2022 Peishan Cong, Yiteng Xu, Yiming Ren, Juze Zhang, Lan Xu, Jingya Wang, Jingyi Yu, Yuexin Ma

Motivated by this, we propose a monocular camera and single LiDAR-based method for 3D multi-person pose estimation in large-scale scenes, which is easy to deploy and insensitive to light.

3D Multi-Person Pose Estimation, 3D Pose Estimation +2

Joint Rigid Motion Correction and Sparse-View CT via Self-Calibrating Neural Field

no code implementations 23 Oct 2022 Qing Wu, Xin Li, Hongjiang Wei, Jingyi Yu, Yuyao Zhang

NeRF-based SVCT methods represent the desired CT image as a continuous function of spatial coordinates and train a Multi-Layer Perceptron (MLP) to learn the function by minimizing loss on the SV sinogram.
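The training objective described here (fit a coordinate-based image so that its projections match the measured sparse-view sinogram) can be illustrated with a toy parallel-beam model. The projection operator and images below are simplified stand-ins, not the paper's setup.

```python
import numpy as np

# A small discrete grid stands in for the coordinate MLP's output.
image = np.zeros((8, 8))
image[2:6, 3:5] = 1.0  # the "object" to reconstruct

def project(img):
    # Toy sparse-view "sinogram": two parallel-beam views (row and column sums).
    return np.concatenate([img.sum(axis=0), img.sum(axis=1)])

measured = project(image)                  # measurements from the true image
estimate = np.full((8, 8), image.mean())   # a candidate reconstruction

# The loss is computed in sinogram space, as in the SVCT setting described above.
loss = np.mean((project(estimate) - measured) ** 2)
print(loss)
```

Minimizing this loss over the network's parameters (here, over the pixel values) drives the reconstruction toward consistency with the measured projections.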

Human Performance Modeling and Rendering via Neural Animated Mesh

1 code implementation 18 Sep 2022 Fuqiang Zhao, Yuheng Jiang, Kaixin Yao, Jiakai Zhang, Liao Wang, Haizhao Dai, Yuhui Zhong, Yingliang Zhang, Minye Wu, Lan Xu, Jingyi Yu

In this paper, we present a comprehensive neural approach for high-quality reconstruction, compression, and rendering of human performances from dense multi-view videos.

SCULPTOR: Skeleton-Consistent Face Creation Using a Learned Parametric Generator

no code implementations 14 Sep 2022 Zesong Qiu, Yuwei Li, Dongming He, Qixuan Zhang, Longwen Zhang, Yinghao Zhang, Jingya Wang, Lan Xu, Xudong Wang, Yuyao Zhang, Jingyi Yu

Named after the fossils of one of the oldest known human ancestors, our LUCY dataset contains high-quality Computed Tomography (CT) scans of the complete human head before and after orthognathic surgeries, critical for evaluating surgery results.

Computed Tomography (CT)

Self-Supervised Coordinate Projection Network for Sparse-View Computed Tomography

1 code implementation 12 Sep 2022 Qing Wu, Ruimin Feng, Hongjiang Wei, Jingyi Yu, Yuyao Zhang

Compared with recent related works that solve similar problems using an implicit neural representation (INR) network, our essential contribution is an effective and simple re-projection strategy that pushes tomographic image reconstruction quality beyond that of supervised deep-learning CT reconstruction works.

Image Reconstruction

Generative Deformable Radiance Fields for Disentangled Image Synthesis of Topology-Varying Objects

no code implementations 9 Sep 2022 Ziyu Wang, Yu Deng, Jiaolong Yang, Jingyi Yu, Xin Tong

Experiments show that our method can successfully learn the generative model from unstructured monocular images and disentangle shape and appearance well for objects (e.g., chairs) with large topological variance.

Disentanglement, Image Generation +1

Mutual Adaptive Reasoning for Monocular 3D Multi-Person Pose Estimation

no code implementations 16 Jul 2022 Juze Zhang, Jingya Wang, Ye Shi, Fei Gao, Lan Xu, Jingyi Yu

This method first uses 2.5D pose and geometry information to infer camera-centric root depths in a forward pass, and then exploits the root depths to further improve representation learning of 2.5D pose estimation in a backward pass.

3D Multi-Person Pose Estimation, Depth Estimation +2

NARRATE: A Normal Assisted Free-View Portrait Stylizer

no code implementations 3 Jul 2022 Youjia Wang, Teng Xu, Yiwen Wu, Minzhang Li, Wenzheng Chen, Lan Xu, Jingyi Yu

We extend Total Relighting to fix this problem by unifying its multi-view input normal maps with the physical face model.

Face Model, Neural Rendering +1

LiDAR-aid Inertial Poser: Large-scale Human Motion Capture by Sparse Inertial and LiDAR Sensors

no code implementations 30 May 2022 Yiming Ren, Chengfeng Zhao, Yannan He, Peishan Cong, Han Liang, Jingyi Yu, Lan Xu, Yuexin Ma

We propose a multi-sensor fusion method for capturing challenging 3D human motions with accurate consecutive local poses and global trajectories in large-scale scenarios, using only a single LiDAR and 4 IMUs, which are set up conveniently and worn lightly.

Sensor Fusion, Translation

PREF: Phasorial Embedding Fields for Compact Neural Representations

1 code implementation 26 May 2022 Binbin Huang, Xinhao Yan, Anpei Chen, Shenghua Gao, Jingyi Yu

We present an efficient frequency-based neural representation termed PREF: a shallow MLP augmented with a phasor volume that covers a significantly broader spectrum than previous Fourier feature mapping or positional encoding.

HSC4D: Human-centered 4D Scene Capture in Large-scale Indoor-outdoor Space Using Wearable IMUs and LiDAR

1 code implementation CVPR 2022 Yudi Dai, Yitai Lin, Chenglu Wen, Siqi Shen, Lan Xu, Jingyi Yu, Yuexin Ma, Cheng Wang

We propose Human-centered 4D Scene Capture (HSC4D) to accurately and efficiently create a dynamic digital world, containing large-scale indoor-outdoor scenes, diverse human motions, and rich interactions between humans and environments.

3D Human Pose Estimation, Autonomous Driving

TensoRF: Tensorial Radiance Fields

2 code implementations 17 Mar 2022 Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, Hao Su

We demonstrate that applying traditional CP decomposition -- that factorizes tensors into rank-one components with compact vectors -- in our framework leads to improvements over vanilla NeRF.

Low-Dose X-Ray CT Reconstruction, Novel View Synthesis
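The CP decomposition mentioned in the TensoRF abstract can be illustrated directly: a 3D tensor is approximated as a sum of rank-one components, each the outer product of three compact vectors. The rank, grid size, and random values below are illustrative assumptions; TensoRF learns such factors for radiance-field features.

```python
import numpy as np

rng = np.random.default_rng(0)
R, N = 4, 32  # assumed rank and per-axis grid size

# Three factor matrices, one per tensor mode.
u = rng.standard_normal((R, N))
v = rng.standard_normal((R, N))
w = rng.standard_normal((R, N))

# Sum of R rank-one terms: T[i, j, k] = sum_r u[r, i] * v[r, j] * w[r, k]
T = np.einsum("ri,rj,rk->ijk", u, v, w)

# The compactness gain: a dense grid needs N^3 values, CP needs 3*R*N.
dense_params = N ** 3
cp_params = 3 * R * N
print(T.shape, dense_params, cp_params)  # (32, 32, 32) 32768 384
```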

NeReF: Neural Refractive Field for Fluid Surface Reconstruction and Implicit Representation

no code implementations 8 Mar 2022 Ziyu Wang, Wei Yang, Junming Cao, Lan Xu, Junqing Yu, Jingyi Yu

We present a novel neural refractive field (NeReF) to recover the wavefront of transparent fluids by simultaneously estimating the surface position and normal of the fluid front.

Surface Reconstruction

Fourier PlenOctrees for Dynamic Radiance Field Rendering in Real-time

no code implementations CVPR 2022 Liao Wang, Jiakai Zhang, Xinhang Liu, Fuqiang Zhao, Yanshun Zhang, Yingliang Zhang, Minye Wu, Lan Xu, Jingyi Yu

In this paper, we present a novel Fourier PlenOctree (FPO) technique to tackle efficient neural modeling and real-time rendering of dynamic scenes captured under the free-view video (FVV) setting.

NeuVV: Neural Volumetric Videos with Immersive Rendering and Editing

no code implementations 12 Feb 2022 Jiakai Zhang, Liao Wang, Xinhang Liu, Fuqiang Zhao, Minzhang Li, Haizhao Dai, Boyuan Zhang, Wei Yang, Lan Xu, Jingyi Yu

We further develop a hybrid neural-rasterization rendering framework to support consumer-level VR headsets so that the aforementioned volumetric video viewing and editing, for the first time, can be conducted immersively in virtual 3D space.

3D Reconstruction

Video-driven Neural Physically-based Facial Asset for Production

no code implementations 11 Feb 2022 Longwen Zhang, Chuxiao Zeng, Qixuan Zhang, Hongyang Lin, Ruixiang Cao, Wei Yang, Lan Xu, Jingyi Yu

In this paper, we present a new learning-based, video-driven approach for generating dynamic facial geometries with high-quality physically-based assets.

4k, motion retargeting +1

Artemis: Articulated Neural Pets with Appearance and Motion synthesis

1 code implementation 11 Feb 2022 Haimin Luo, Teng Xu, Yuheng Jiang, Chenglin Zhou, Qiwei Qiu, Yingliang Zhang, Wei Yang, Lan Xu, Jingyi Yu

Our ARTEMIS enables interactive motion control, real-time animation, and photo-realistic rendering of furry animals.

Motion Synthesis

NIMBLE: A Non-rigid Hand Model with Bones and Muscles

no code implementations 9 Feb 2022 Yuwei Li, Longwen Zhang, Zesong Qiu, Yingwenqi Jiang, Nianyi Li, Yuexin Ma, Yuyao Zhang, Lan Xu, Jingyi Yu

Emerging Metaverse applications demand reliable, accurate, and photorealistic reproductions of human hands to perform sophisticated operations as if in the physical world.

Onsite Non-Line-of-Sight Imaging via Online Calibrations

no code implementations 29 Dec 2021 Zhengqing Pan, Ruiqian Li, Tian Gao, Zi Wang, Ping Liu, Siyuan Shen, Tao Wu, Jingyi Yu, Shiying Li

There has been an increasing interest in deploying non-line-of-sight (NLOS) imaging systems for recovering objects behind an obstacle.


HumanNeRF: Efficiently Generated Human Radiance Field from Sparse Inputs

no code implementations CVPR 2022 Fuqiang Zhao, Wei Yang, Jiakai Zhang, Pei Lin, Yingliang Zhang, Jingyi Yu, Lan Xu

The raw HumanNeRF can already produce reasonable rendering on sparse video inputs of unseen subjects and camera settings.

An Arbitrary Scale Super-Resolution Approach for 3D MR Images via Implicit Neural Representation

1 code implementation 27 Oct 2021 Qing Wu, Yuwei Li, Yawen Sun, Yan Zhou, Hongjiang Wei, Jingyi Yu, Yuyao Zhang

In the ArSSR model, the reconstruction of HR images with different up-scaling rates is defined as learning a continuous implicit voxel function from the observed LR images.

Decoder, Image Reconstruction +1
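The "continuous implicit voxel function" idea behind arbitrary-scale super-resolution can be sketched in a few lines: once the image is a function of coordinates, any up-scaling rate is just a denser query grid. The analytic function below is a toy stand-in for ArSSR's trained decoder network.

```python
import numpy as np

def implicit_voxel(coords):
    # Toy continuous "intensity" function over normalized coordinates in [0, 1]^3.
    # In ArSSR, this role is played by a learned network conditioned on LR features.
    x, y, z = coords.T
    return np.sin(np.pi * x) * np.cos(np.pi * y) * z

def sample_grid(f, res):
    # Query the continuous function on a regular res^3 grid.
    axes = [np.linspace(0.0, 1.0, res)] * 3
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    return f(grid).reshape(res, res, res)

lr = sample_grid(implicit_voxel, 16)  # coarse query
hr = sample_grid(implicit_voxel, 40)  # 2.5x up-sampling: simply a denser query
print(lr.shape, hr.shape)
```

Because the representation is continuous, non-integer scale factors need no retraining: they only change the query grid.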

Light Field-Based Underwater 3D Reconstruction Via Angular Resampling

no code implementations 5 Sep 2021 Yuqi Ding, Zhang Chen, Yu Ji, Jingyi Yu, Jinwei Ye

Recovering 3D geometry of underwater scenes is challenging because of non-linear refraction of light at the water-air interface caused by the camera housing.

3D Reconstruction, Depth Estimation

iButter: Neural Interactive Bullet Time Generator for Human Free-viewpoint Rendering

no code implementations 12 Aug 2021 Liao Wang, Ziyu Wang, Pei Lin, Yuheng Jiang, Xin Suo, Minye Wu, Lan Xu, Jingyi Yu

To fill this gap, in this paper we propose a neural interactive bullet-time generator (iButter) for photo-realistic human free-viewpoint rendering from dense RGB streams, which enables flexible and interactive design for human bullet-time visual effects.

Video Generation

Neural Free-Viewpoint Performance Rendering under Complex Human-object Interactions

no code implementations 1 Aug 2021 Guoxing Sun, Xin Chen, Yizhang Chen, Anqi Pang, Pei Lin, Yuheng Jiang, Lan Xu, Jingya Wang, Jingyi Yu

In this paper, we propose a neural human performance capture and rendering system that generates high-quality geometry and photo-realistic texture of both humans and objects under challenging interaction scenarios in arbitrary novel views, from only sparse RGB streams.

4D reconstruction, Dynamic Reconstruction +5

Neural Relighting and Expression Transfer On Video Portraits

no code implementations 30 Jul 2021 Youjia Wang, Taotao Zhou, Minzhang Li, Teng Xu, Minye Wu, Lan Xu, Jingyi Yu

We present a neural relighting and expression transfer technique to transfer the facial expressions from a source performer to a portrait video of a target performer while enabling dynamic relighting.

Multi-Task Learning, Neural Rendering

Few-shot Neural Human Performance Rendering from Sparse RGBD Videos

no code implementations 14 Jul 2021 Anqi Pang, Xin Chen, Haimin Luo, Minye Wu, Jingyi Yu, Lan Xu

To fill this gap, in this paper we propose a few-shot neural human rendering approach (FNHR) from only sparse RGBD inputs, which exploits the temporal and spatial redundancy to generate photo-realistic free-view output of human activities.

Neural Rendering

IREM: High-Resolution Magnetic Resonance (MR) Image Reconstruction via Implicit Neural Representation

no code implementations 29 Jun 2021 Qing Wu, Yuwei Li, Lan Xu, Ruiming Feng, Hongjiang Wei, Qing Yang, Boliang Yu, Xiaozhao Liu, Jingyi Yu, Yuyao Zhang

To collect high-quality high-resolution (HR) MR images, we propose a novel image reconstruction network named IREM, which is trained on multiple low-resolution (LR) MR images and achieves an arbitrary up-sampling rate for HR image reconstruction.

Anatomy, Image Reconstruction +1

PIANO: A Parametric Hand Bone Model from Magnetic Resonance Imaging

1 code implementation 21 Jun 2021 Yuwei Li, Minye Wu, Yuyao Zhang, Lan Xu, Jingyi Yu

Hand modeling is critical for immersive VR/AR, action understanding, or human healthcare.

Action Understanding

Editable Free-viewpoint Video Using a Layered Neural Representation

1 code implementation 30 Apr 2021 Jiakai Zhang, Xinhang Liu, Xinyi Ye, Fuqiang Zhao, Yanshun Zhang, Minye Wu, Yingliang Zhang, Lan Xu, Jingyi Yu

Such a layered representation supports full perception and realistic manipulation of the dynamic scene while still supporting a free viewing experience over a wide range.

Disentanglement, Scene Parsing +1

SportsCap: Monocular 3D Human Motion Capture and Fine-grained Understanding in Challenging Sports Videos

1 code implementation 23 Apr 2021 Xin Chen, Anqi Pang, Wei Yang, Yuexin Ma, Lan Xu, Jingyi Yu

In this paper, we propose SportsCap -- the first approach for simultaneously capturing 3D human motions and understanding fine-grained actions from monocular challenging sports video input.

Action Assessment, Attribute +1

MirrorNeRF: One-shot Neural Portrait Radiance Field from Multi-mirror Catadioptric Imaging

no code implementations 6 Apr 2021 Ziyu Wang, Liao Wang, Fuqiang Zhao, Minye Wu, Lan Xu, Jingyi Yu

In this paper, we propose MirrorNeRF, a one-shot neural portrait free-viewpoint rendering approach using a catadioptric imaging system with multiple sphere mirrors and a single high-resolution digital camera. It is the first to combine neural radiance fields with catadioptric imaging, enabling one-shot photo-realistic human portrait reconstruction and rendering in a low-cost and casual capture setting.

Convolutional Neural Opacity Radiance Fields

1 code implementation 5 Apr 2021 Haimin Luo, Anpei Chen, Qixuan Zhang, Bai Pang, Minye Wu, Lan Xu, Jingyi Yu

In this paper, we propose a novel scheme to generate opacity radiance fields with a convolutional neural renderer for fuzzy objects. It is the first to combine explicit opacity supervision and a convolutional mechanism within the neural radiance field framework, enabling high-quality appearance and globally consistent alpha-matte generation in arbitrary novel views.

Neural Video Portrait Relighting in Real-time via Consistency Modeling

1 code implementation ICCV 2021 Longwen Zhang, Qixuan Zhang, Minye Wu, Jingyi Yu, Lan Xu

In this paper, we propose a neural approach for real-time, high-quality and coherent video portrait relighting, which jointly models the semantic, temporal and lighting consistency using a new dynamic OLAT dataset.

Decoder, Disentanglement +1

MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo

2 code implementations ICCV 2021 Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, Hao Su

We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.

Neural Rendering

GNeRF: GAN-based Neural Radiance Field without Posed Camera

1 code implementation ICCV 2021 Quan Meng, Anpei Chen, Haimin Luo, Minye Wu, Hao Su, Lan Xu, Xuming He, Jingyi Yu

We introduce GNeRF, a framework that marries Generative Adversarial Networks (GANs) with Neural Radiance Field (NeRF) reconstruction for complex scenarios with unknown and even randomly initialized camera poses.

Novel View Synthesis

Non-line-of-Sight Imaging via Neural Transient Fields

1 code implementation 2 Jan 2021 Siyuan Shen, Zi Wang, Ping Liu, Zhengqing Pan, Ruiqian Li, Tian Gao, Shiying Li, Jingyi Yu

We present a neural modeling framework for Non-Line-of-Sight (NLOS) imaging.

LGNN: A Context-aware Line Segment Detector

no code implementations 13 Aug 2020 Quan Meng, Jiakai Zhang, Qiang Hu, Xuming He, Jingyi Yu

We present a novel real-time line segment detection scheme called Line Graph Neural Network (LGNN).

Graph Neural Network Line Segment Detection

SofGAN: A Portrait Image Generator with Dynamic Styling

1 code implementation7 Jul 2020 Anpei Chen, Ruiyang Liu, Ling Xie, Zhang Chen, Hao Su, Jingyi Yu

To address this issue, we propose a SofGAN image generator to decouple the latent space of portraits into two subspaces: a geometry space and a texture space.

2D Semantic Segmentation Image Generation +1

AutoSweep: Recovering 3D Editable Objects from a Single Photograph

1 code implementation27 May 2020 Xin Chen, Yuwei Li, Xi Luo, Tianjia Shao, Jingyi Yu, Kun Zhou, Youyi Zheng

We base our work on the assumption that most human-made objects are composed of parts, and that these parts can be well represented by generalized primitives.

3D Reconstruction Instance Segmentation +1

OSLNet: Deep Small-Sample Classification with an Orthogonal Softmax Layer

1 code implementation20 Apr 2020 Xiaoxu Li, Dongliang Chang, Zhanyu Ma, Zheng-Hua Tan, Jing-Hao Xue, Jie Cao, Jingyi Yu, Jun Guo

A deep neural network of multiple nonlinear layers forms a large function space, which can easily lead to overfitting when it encounters small-sample data.

Classification General Classification
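The orthogonality idea behind OSLNet can be illustrated with a toy construction: classifier weight vectors that are mutually orthogonal by design, so class directions do not co-adapt on small-sample data. The sketch below uses a QR decomposition to obtain orthonormal columns; it illustrates the constraint only and is not the paper's layer implementation.

```python
import numpy as np

def orthogonal_classifier_weights(feat_dim, n_classes, seed=0):
    """Build a feat_dim x n_classes weight matrix whose columns are
    mutually orthogonal unit vectors, via QR decomposition of a random
    Gaussian matrix (requires feat_dim >= n_classes)."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((feat_dim, n_classes)))
    return q  # columns satisfy q.T @ q = I

W = orthogonal_classifier_weights(8, 4)
gram = W.T @ W  # should be (numerically) the 4x4 identity
```

With orthonormal columns, each class score `x @ W[:, k]` is a projection onto an independent direction of feature space.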

Spatial-Angular Interaction for Light Field Image Super-Resolution

1 code implementation17 Dec 2019 Yingqian Wang, Longguang Wang, Jungang Yang, Wei An, Jingyi Yu, Yulan Guo

Specifically, spatial and angular features are first separately extracted from input LFs, and then repetitively interacted to progressively incorporate spatial and angular information.

Image Super-Resolution SSIM

A Neural Rendering Framework for Free-Viewpoint Relighting

2 code implementations CVPR 2020 Zhang Chen, Anpei Chen, Guli Zhang, Chengyuan Wang, Yu Ji, Kiriakos N. Kutulakos, Jingyi Yu

We present a novel Relightable Neural Renderer (RNR) for simultaneous view synthesis and relighting using multi-view image inputs.

Neural Rendering Novel View Synthesis

Deep Coarse-to-fine Dense Light Field Reconstruction with Flexible Sampling and Geometry-aware Fusion

1 code implementation31 Aug 2019 Jing Jin, Junhui Hou, Jie Chen, Huanqiang Zeng, Sam Kwong, Jingyi Yu

Specifically, the coarse sub-aperture image (SAI) synthesis module first explores the scene geometry from an unstructured sparsely-sampled LF and leverages it to independently synthesize novel SAIs, in which a confidence-based blending strategy is proposed to fuse the information from different input SAIs, giving an intermediate densely-sampled LF.

Computational Efficiency Depth Estimation

Light Field Super-resolution via Attention-Guided Fusion of Hybrid Lenses

1 code implementation23 Jul 2019 Jing Jin, Junhui Hou, Jie Chen, Sam Kwong, Jingyi Yu

To the best of our knowledge, this is the first end-to-end deep learning method for reconstructing a high-resolution LF image with a hybrid input.


Learning Semantics-aware Distance Map with Semantics Layering Network for Amodal Instance Segmentation

1 code implementation30 May 2019 Ziheng Zhang, Anpei Chen, Ling Xie, Jingyi Yu, Shenghua Gao

Specifically, we first introduce a new representation, namely a semantics-aware distance map (sem-dist map), to serve as our target for amodal segmentation instead of the commonly used masks and heatmaps.

Amodal Instance Segmentation Segmentation +1

PIV-Based 3D Fluid Flow Reconstruction Using Light Field Camera

no code implementations15 Apr 2019 Zhong Li, Jinwei Ye, Yu Ji, Hao Sheng, Jingyi Yu

Particle Imaging Velocimetry (PIV) estimates the flow of fluid by analyzing the motion of injected particles.

Depth Estimation Optical Flow Estimation

Non-Lambertian Surface Shape and Reflectance Reconstruction Using Concentric Multi-Spectral Light Field

no code implementations9 Apr 2019 Mingyuan Zhou, Yu Ji, Yuqi Ding, Jinwei Ye, S. Susan Young, Jingyi Yu

In this paper, we introduce a novel concentric multi-spectral light field (CMSLF) design that is able to recover the shape and reflectance of surfaces with arbitrary material in one shot.

Depth Estimation

3D Face Reconstruction Using Color Photometric Stereo with Uncalibrated Near Point Lights

no code implementations4 Apr 2019 Zhang Chen, Yu Ji, Mingyuan Zhou, Sing Bing Kang, Jingyi Yu

We avoid the need for spatial constancy of albedo; instead, we use a new measure for albedo similarity that is based on the albedo norm profile.

3D Face Reconstruction Semantic Segmentation

TightCap: 3D Human Shape Capture with Clothing Tightness Field

1 code implementation4 Apr 2019 Xin Chen, Anqi Pang, Wei Yang, Lan Xu, Jingyi Yu

In this paper, we present TightCap, a data-driven scheme to capture both the human shape and dressed garments accurately with only a single 3D human scan, which enables numerous applications such as virtual try-on, biometrics and body evaluation.

Virtual Try-on

Generic Multiview Visual Tracking

no code implementations4 Apr 2019 Minye Wu, Haibin Ling, Ning Bi, Shenghua Gao, Hao Sheng, Jingyi Yu

A natural solution to these challenges is to use multiple cameras with multiview inputs, though existing systems are mostly limited to specific targets (e.g., humans), static cameras, and/or camera calibration.

Camera Calibration Trajectory Prediction +1

Photo-Realistic Facial Details Synthesis from Single Image

1 code implementation ICCV 2019 Anpei Chen, Zhang Chen, Guli Zhang, Ziheng Zhang, Kenny Mitchell, Jingyi Yu

Our technique employs expression analysis for proxy face geometry generation and combines supervised and unsupervised learning for facial detail synthesis.

Face Generation

Hair Segmentation on Time-of-Flight RGBD Images

no code implementations7 Mar 2019 Yuanxi Ma, Cen Wang, Shiying Li, Jingyi Yu

Robust segmentation of hair from portrait images remains challenging: hair does not conform to a uniform shape, style or even color; dark hair in particular lacks features.

Deep Surface Light Fields

no code implementations15 Oct 2018 Anpei Chen, Minye Wu, Yingliang Zhang, Nianyi Li, Jie Lu, Shenghua Gao, Jingyi Yu

A surface light field represents the radiance of rays originating from any point on the surface in any direction.

Data Compression Image Registration

4D Human Body Correspondences from Panoramic Depth Maps

no code implementations CVPR 2018 Zhong Li, Minye Wu, Wangyiteng Zhou, Jingyi Yu

The availability of affordable 3D full body reconstruction systems has given rise to free-viewpoint video (FVV) of human shapes.

Saliency Detection in 360° Videos

no code implementations ECCV 2018 Ziheng Zhang, Yanyu Xu, Jingyi Yu, Shenghua Gao

Considering that 360° videos are usually stored as equirectangular panoramas, we propose to implement the spherical convolution on the panorama by stretching and rotating the kernel based on the location of the patch to be convolved.

Video Saliency Detection
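The kernel stretching described above follows from equirectangular geometry: horizontal pixel spacing shrinks by cos(latitude), so a kernel with a fixed angular footprint must widen toward the poles. A minimal sketch of the per-row horizontal dilation factor (the kernel-rotation step is omitted, and the function name is illustrative, not from the paper's code):

```python
import math

def horizontal_dilation(row, height):
    """Horizontal stretch factor 1/cos(latitude) for a kernel of fixed
    angular size applied at a given equirectangular row (row 0 is the
    top of the panorama); clamped near the poles to avoid blow-up."""
    lat = (0.5 - (row + 0.5) / height) * math.pi  # latitude in (-pi/2, pi/2)
    return 1.0 / max(math.cos(lat), 1e-6)

# Near the equator the kernel is essentially unchanged;
# near the poles it must widen dramatically.
mid = horizontal_dilation(50, 100)  # ~equator
top = horizontal_dilation(0, 100)   # near the pole
```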

Learning to Dodge A Bullet: Concyclic View Morphing via Deep Learning

no code implementations ECCV 2018 Shi Jin, Ruiyang Liu, Yu Ji, Jinwei Ye, Jingyi Yu

The bullet-time effect, presented in the feature film "The Matrix", has been widely adopted in feature films and TV commercials to create an amazing stopping-time illusion.

A Generic Multi-Projection-Center Model and Calibration Method for Light Field Cameras

no code implementations7 Aug 2018 Qi Zhang, Chunping Zhang, Jinbo Ling, Qing Wang, Jingyi Yu

Based on the MPC model and projective transformation, we propose a calibration algorithm to verify our light field camera model.

3D Reconstruction

Gaze Prediction in Dynamic 360° Immersive Videos

no code implementations CVPR 2018 Yanyu Xu, Yanbing Dong, Junru Wu, Zhengzhong Sun, Zhiru Shi, Jingyi Yu, Shenghua Gao

This paper explores gaze prediction in dynamic 360° immersive videos, i.e., based on the history scan path and VR content, we predict where a viewer will look at an upcoming time.

Gaze Prediction

Automatic 3D Indoor Scene Modeling From Single Panorama

no code implementations CVPR 2018 Yang Yang, Shi Jin, Ruiyang Liu, Sing Bing Kang, Jingyi Yu

The recovered layout is then used to guide shape estimation of the remaining objects using their normal information.

Object Detection +1

Focus Manipulation Detection via Photometric Histogram Analysis

no code implementations CVPR 2018 Can Chen, Scott McCloskey, Jingyi Yu

With the rise of misinformation spread via social media channels, enabled by the increasing automation and realism of image manipulation tools, image forensics is an increasingly relevant problem.

Image Forensics Image Manipulation +1

Semantic See-Through Rendering on Light Fields

no code implementations26 Mar 2018 Huangjie Yu, Guli Zhang, Yuanxi Ma, Yingliang Zhang, Jingyi Yu

We present a novel semantic light field (LF) refocusing technique that can achieve unprecedented see-through quality.

Stereo Matching

Robust 3D Human Motion Reconstruction Via Dynamic Template Construction

no code implementations31 Jan 2018 Zhong Li, Yu Ji, Wei Yang, Jinwei Ye, Jingyi Yu

In multi-view human body capture systems, the recovered 3D geometry or even the acquired imagery data can be heavily corrupted due to occlusions, noise, limited field of view, etc.

Sparse Photometric 3D Face Reconstruction Guided by Morphable Models

no code implementations CVPR 2018 Xuan Cao, Zhang Chen, Anpei Chen, Xin Chen, Cen Wang, Jingyi Yu

We present a novel 3D face reconstruction technique that leverages sparse photometric stereo (PS) and the latest advances in face registration/modeling from a single image.

3D Face Reconstruction Position +1

Deep Eyes: Binocular Depth-from-Focus on Focal Stack Pairs

no code implementations29 Nov 2017 Xinqing Guo, Zhang Chen, Siyuan Li, Yang Yang, Jingyi Yu

We then construct three individual networks: a Focus-Net to extract depth from a single focal stack, an EDoF-Net to obtain the extended depth of field (EDoF) image from the focal stack, and a Stereo-Net to conduct stereo matching.

Stereo Matching

Personalized Saliency and its Prediction

1 code implementation9 Oct 2017 Yanyu Xu, Shenghua Gao, Junru Wu, Nianyi Li, Jingyi Yu

Specifically, we propose to decompose a personalized saliency map (referred to as PSM) into a universal saliency map (referred to as USM) predictable by existing saliency detection models and a new discrepancy map across users that characterizes personalized saliency.

Saliency Detection
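The PSM = USM + discrepancy decomposition described above can be sketched in a few lines; the array values are toy data, and clipping to [0, 1] is an assumption about the saliency range, not a detail taken from the paper:

```python
import numpy as np

def personalized_saliency(usm, discrepancy):
    """Combine a universal saliency map (USM) with a per-user
    discrepancy map to form a personalized saliency map (PSM),
    clipped back to the valid [0, 1] saliency range."""
    return np.clip(usm + discrepancy, 0.0, 1.0)

usm = np.array([[0.2, 0.8],
                [0.5, 0.1]])          # predictable by existing models
delta = np.array([[0.3, -0.2],
                  [0.6, 0.0]])        # user-specific offsets
psm = personalized_saliency(usm, delta)
```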

Ray Space Features for Plenoptic Structure-From-Motion

no code implementations ICCV 2017 Yingliang Zhang, Peihong Yu, Wei Yang, Yuanxi Ma, Jingyi Yu

In this paper, we explore using light fields captured by plenoptic cameras or camera arrays as inputs.

Catadioptric HyperSpectral Light Field Imaging

no code implementations ICCV 2017 Yujia Xue, Kang Zhu, Qiang Fu, Xilin Chen, Jingyi Yu

In this paper, we present a single camera hyperspectral light field imaging solution that we call Snapshot Plenoptic Imager (SPI).

Robust Guided Image Filtering

no code implementations28 Mar 2017 Wei Liu, Xiaogang Chen, Chunhua Shen, Jingyi Yu, Qiang Wu, Jie Yang

In this paper, we propose a general framework for Robust Guided Image Filtering (RGIF), which contains a data term and a smoothness term, to solve the two issues mentioned above.

Rotational Crossed-Slit Light Field

no code implementations CVPR 2016 Nianyi Li, Haiting Lin, Bilin Sun, Mingyuan Zhou, Jingyi Yu

In this paper, we present a novel LF sampling scheme by exploiting a special non-centric camera called the crossed-slit or XSlit camera.

Stereo Matching

Depth Recovery From Light Field Using Focal Stack Symmetry

no code implementations ICCV 2015 Haiting Lin, Can Chen, Sing Bing Kang, Jingyi Yu

The other is a data consistency measure based on analysis-by-synthesis, i.e., the difference between the synthesized focal stack given the hypothesized depth map and that from the LF.
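The analysis-by-synthesis measure can be sketched as follows: score each hypothesized depth by how well its synthesized focal stack matches the observed one, and keep the best. The constant-stack synthesis model below is a deliberately trivial stand-in for rendering a focal stack from the LF:

```python
import numpy as np

def best_depth(observed_stack, synthesize, depth_candidates):
    """Analysis-by-synthesis depth selection: score each candidate
    depth by the squared difference between the focal stack it
    predicts and the observed one, then return the minimizer."""
    costs = [np.sum((synthesize(d) - observed_stack) ** 2)
             for d in depth_candidates]
    return depth_candidates[int(np.argmin(costs))]

# Toy synthesis model: each depth d predicts a constant-valued stack.
synthesize = lambda d: np.full((3, 4, 4), float(d))
observed = np.full((3, 4, 4), 2.1)  # stack observed at true depth ~2
est = best_depth(observed, synthesize, [0, 1, 2, 3])
```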

Scene-adaptive Coded Apertures Imaging

no code implementations19 Jun 2015 Xuehui Wang, Jinli Suo, Jingyi Yu, Yongdong Zhang, Qionghai Dai

Firstly, we capture the scene with a pinhole and analyze the scene content to determine primary edge orientations.

Robust High Quality Image Guided Depth Upsampling

no code implementations17 Jun 2015 Wei Liu, Yijun Li, Xiaogang Chen, Jie Yang, Qiang Wu, Jingyi Yu

A popular solution is upsampling the obtained noisy low resolution depth map with the guidance of the companion high resolution color image.

Vocal Bursts Intensity Prediction
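The popular guidance-based solution mentioned in the abstract is typically a joint bilateral upsampling: each high-resolution pixel averages nearby low-resolution depth samples, weighted by color similarity in the guide image. A minimal grayscale sketch (spatial weights are omitted for brevity, and this is not the paper's proposed method, which is designed to be more robust than this baseline):

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, color_hi, scale, sigma_c=0.1):
    """Upsample a low-res depth map with a high-res grayscale guide:
    each high-res pixel averages the 3x3 neighborhood of low-res depth
    samples around it, weighted by color similarity in the guide."""
    H, W = color_hi.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y // scale, x // scale
            num = den = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ly, lx = cy + dy, cx + dx
                    if 0 <= ly < depth_lo.shape[0] and 0 <= lx < depth_lo.shape[1]:
                        # guide color at the low-res sample's location
                        gc = color_hi[min(ly * scale, H - 1), min(lx * scale, W - 1)]
                        w = np.exp(-((color_hi[y, x] - gc) ** 2) / (2 * sigma_c ** 2))
                        num += w * depth_lo[ly, lx]
                        den += w
            out[y, x] = num / den
    return out

# Toy example: a 2x2 depth map with a vertical depth edge, upsampled
# with a 4x4 guide whose color edge coincides with the depth edge.
depth_lo = np.array([[0.0, 1.0], [0.0, 1.0]])
color_hi = np.zeros((4, 4))
color_hi[:, 2:] = 1.0
depth_hi = joint_bilateral_upsample(depth_lo, color_hi, scale=2)
```

Because the color weights suppress samples from across the edge, the upsampled depth stays sharp where the guide image has an edge.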

Automatic Layer Separation using Light Field Imaging

no code implementations15 Jun 2015 Qiaosong Wang, Haiting Lin, Yi Ma, Sing Bing Kang, Jingyi Yu

We propose a novel approach that jointly removes reflection or translucent layer from a scene and estimates scene depth.

Resolving Scale Ambiguity Via XSlit Aspect Ratio Analysis

no code implementations ICCV 2015 Wei Yang, Haiting Lin, Sing Bing Kang, Jingyi Yu

We first conduct a comprehensive analysis to characterize DDAR, infer object depth from its AR, and model recoverable depth range, sensitivity, and error.

3D Reconstruction

A Weighted Sparse Coding Framework for Saliency Detection

no code implementations CVPR 2015 Nianyi Li, Bilin Sun, Jingyi Yu

In this paper, we present a unified saliency detection framework for handling heterogeneous types of input data.

Saliency Detection Stereo Matching +2

Ambient Occlusion via Compressive Visibility Estimation

no code implementations CVPR 2015 Wei Yang, Yu Ji, Haiting Lin, Yang Yang, Sing Bing Kang, Jingyi Yu

This enables a sparsity-prior based solution for iteratively recovering the surface normal, the surface albedo, and the visibility function from a small number of images.

Compressive Sensing

Curvilinear Structure Tracking by Low Rank Tensor Approximation with Model Propagation

no code implementations CVPR 2014 Erkang Cheng, Yu Pang, Ying Zhu, Jingyi Yu, Haibin Ling

Robust tracking of deformable objects such as catheters or vascular structures in X-ray images is an important technique in image-guided medical interventions for effective motion compensation and dynamic multi-modality image fusion.

Motion Compensation

Aliasing Detection and Reduction in Plenoptic Imaging

no code implementations CVPR 2014 Zhaolin Xiao, Qing Wang, Guoqing Zhou, Jingyi Yu

When using plenoptic camera for digital refocusing, angular undersampling can cause severe (angular) aliasing artifacts.


Image Pre-compensation: Balancing Contrast and Ringing

no code implementations CVPR 2014 Yu Ji, Jinwei Ye, Sing Bing Kang, Jingyi Yu

In particular, we show that linear tone mapping eliminates ringing but incurs severe contrast loss, while non-linear tone mapping functions such as Gamma curves slightly enhance contrast but introduce ringing.

Tone Mapping
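The trade-off described in the abstract comes down to the shape of the tone curve; below is a toy comparison of the two families (a linear rescaling versus a gamma curve) on intensities in [0, 1], with illustrative parameter values rather than ones from the paper:

```python
import numpy as np

def linear_tone_map(x, lo=0.2, hi=0.8):
    """Linear tone mapping: compress [0, 1] into [lo, hi], leaving
    headroom that absorbs ringing over/undershoots but reducing
    overall contrast."""
    return lo + (hi - lo) * x

def gamma_tone_map(x, gamma=1 / 2.2):
    """Non-linear gamma curve: boosts mid-tone contrast but keeps the
    full output range, so over/undershoots can still clip and ring."""
    return np.power(x, gamma)

x = np.linspace(0.0, 1.0, 5)
lin = linear_tone_map(x)   # full range mapped into [0.2, 0.8]
gam = gamma_tone_map(x)    # mid-tones lifted, endpoints unchanged
```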

Saliency Detection on Light Field

no code implementations CVPR 2014 Nianyi Li, Jinwei Ye, Yu Ji, Haibin Ling, Jingyi Yu

Existing saliency detection approaches use images as inputs and are sensitive to foreground/background similarities, complex background textures, and occlusions.

Saliency Detection

Reconstructing Gas Flows Using Light-Path Approximation

no code implementations CVPR 2013 Yu Ji, Jinwei Ye, Jingyi Yu

By observing the LF-probe through the gas flow, we acquire a dense set of ray-ray correspondences and then reconstruct their light paths.

Fast Patch-Based Denoising Using Approximated Patch Geodesic Paths

no code implementations CVPR 2013 Xiaogang Chen, Sing Bing Kang, Jie Yang, Jingyi Yu

PatchGPs treat image patches as nodes and patch differences as edge weights for computing the shortest (geodesic) paths.

Image Denoising
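The PatchGP construction can be sketched as Dijkstra's algorithm over a grid of patches: nodes are patch coordinates and edge weights are differences between neighboring patches, so the result is a geodesic distance map. The 4-connected adjacency and L2 patch distance here are simplifying assumptions, not the paper's approximation scheme:

```python
import heapq
import numpy as np

def patch_geodesic_distances(patches, source):
    """Dijkstra over a 4-connected grid of patch descriptors: edge
    weights are L2 differences between neighboring patches; returns
    the geodesic distance from `source` to every patch."""
    h, w = patches.shape[:2]
    dist = np.full((h, w), np.inf)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if d > dist[i, j]:
            continue  # stale entry
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < h and 0 <= nj < w:
                nd = d + np.linalg.norm(patches[i, j] - patches[ni, nj])
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(heap, (nd, (ni, nj)))
    return dist

# 2x2 grid of 3-dimensional patch descriptors.
patches = np.array([[[0.0, 0, 0], [1, 0, 0]],
                    [[0.0, 1, 0], [1, 1, 0]]])
dist = patch_geodesic_distances(patches, (0, 0))
```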

Manhattan Scene Understanding via XSlit Imaging

no code implementations CVPR 2013 Jinwei Ye, Yu Ji, Jingyi Yu

Specifically, we prove that parallel 3D lines map to 2D curves in an XSlit image and they converge at an XSlit Vanishing Point (XVP).

Scene Understanding
