Search Results for author: Minye Wu

Found 25 papers, 9 papers with code

Unveiling the Ambiguity in Neural Inverse Rendering: A Parameter Compensation Analysis

no code implementations · 19 Apr 2024 · Georgios Kouros, Minye Wu, Sushruth Nagesh, Xianling Zhang, Tinne Tuytelaars

Inverse rendering aims to reconstruct the scene properties of objects solely from multiview images.
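
A toy illustration of why this problem is ill-posed, in the spirit of the parameter-compensation analysis above (this is a generic Lambertian example, not the paper's method): scaling the albedo down while scaling the lighting up leaves the rendered image unchanged, so multiple parameter sets explain the same observations.

```python
# Toy albedo-lighting ambiguity under a Lambertian model: radiance = albedo * irradiance,
# so compensated parameter pairs render to the exact same image.
import numpy as np

rng = np.random.default_rng(0)
albedo = rng.uniform(0.2, 0.8, size=(4, 4))    # hypothetical surface albedo map
irradiance = 2.0                               # hypothetical scene irradiance

image_a = albedo * irradiance
image_b = (albedo * 0.5) * (irradiance / 0.5)  # compensated: darker albedo, brighter light

assert np.allclose(image_a, image_b)           # both parameter sets fit the observations
print("max pixel difference:", np.abs(image_a - image_b).max())
```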

TeTriRF: Temporal Tri-Plane Radiance Fields for Efficient Free-Viewpoint Video

no code implementations · 10 Dec 2023 · Minye Wu, Zehao Wang, Georgios Kouros, Tinne Tuytelaars

Neural Radiance Fields (NeRF) revolutionize the realm of visual media by providing photorealistic Free-Viewpoint Video (FVV) experiences, offering viewers unparalleled immersion and interactivity.

NeVRF: Neural Video-based Radiance Fields for Long-duration Sequences

no code implementations · 10 Dec 2023 · Minye Wu, Tinne Tuytelaars

Our extensive experiments demonstrate the effectiveness of NeVRF in enabling long-duration sequence rendering, sequential data reconstruction, and compact data storage.

Continual Learning · Novel View Synthesis

VideoRF: Rendering Dynamic Radiance Fields as 2D Feature Video Streams

no code implementations · 3 Dec 2023 · Liao Wang, Kaixin Yao, Chengcheng Guo, Zhirui Zhang, Qiang Hu, Jingyi Yu, Lan Xu, Minye Wu

In this paper, we introduce VideoRF, the first approach to enable real-time streaming and rendering of dynamic radiance fields on mobile platforms.
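
As a rough sketch of the general idea (layouts and names here are illustrative assumptions, not VideoRF's actual design): per-frame scene features are packed into a 2D atlas so that a standard video codec can stream them, and a small shared MLP decodes a sampled feature into density and color at render time.

```python
# Hedged sketch: a 2D feature atlas per video frame plus a tiny shared decoder MLP.
import numpy as np

rng = np.random.default_rng(0)
FEAT_DIM = 8

# One "video frame": a 2D feature atlas a video codec could compress and stream.
atlas = rng.standard_normal((64, 64, FEAT_DIM)).astype(np.float32)

# Shared decoder weights (random stand-ins for trained parameters).
w1 = 0.1 * rng.standard_normal((FEAT_DIM, 16)).astype(np.float32)
w2 = 0.1 * rng.standard_normal((16, 4)).astype(np.float32)

def sample_atlas(u, v):
    """Fetch the feature at atlas texel (u, v); a real system would map 3D
    sample points to texels via a baked lookup table."""
    return atlas[u, v]

def decode(feature):
    """feature -> (density, rgb); softplus keeps density non-negative,
    sigmoid bounds color to [0, 1]."""
    h = np.maximum(feature @ w1, 0.0)
    out = h @ w2
    return np.log1p(np.exp(out[0])), 1.0 / (1.0 + np.exp(-out[1:]))

density, rgb = decode(sample_atlas(3, 7))
print("density:", density, "rgb:", rgb)
```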

Ref-DVGO: Reflection-Aware Direct Voxel Grid Optimization for an Improved Quality-Efficiency Trade-Off in Reflective Scene Reconstruction

1 code implementation · 16 Aug 2023 · Georgios Kouros, Minye Wu, Shubham Shrivastava, Sushruth Nagesh, Punarjay Chakravarty, Tinne Tuytelaars

To this end, we investigate an implicit-explicit approach based on conventional volume rendering to enhance the reconstruction quality and accelerate the training and rendering processes.

Novel View Synthesis
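
For reference, the conventional volume rendering that Ref-DVGO builds on composites density and color samples along each ray. A minimal sketch with synthetic stand-in values for the grid queries:

```python
# Standard emission-absorption volume rendering along one ray (NeRF-style quadrature).
import numpy as np

def composite(densities, colors, deltas):
    alphas = 1.0 - np.exp(-densities * deltas)                       # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))   # transmittance T_i
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)                   # expected ray color

rng = np.random.default_rng(0)
n = 64
densities = rng.uniform(0.0, 5.0, n)      # sigma_i, stand-ins for density-grid queries
colors = rng.uniform(0.0, 1.0, (n, 3))    # c_i, stand-ins for color queries
deltas = np.full(n, 1.0 / n)              # sample spacing along the ray

print("rendered ray color:", composite(densities, colors, deltas))
```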

Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos

no code implementations · CVPR 2023 · Liao Wang, Qiang Hu, Qihan He, Ziyu Wang, Jingyi Yu, Tinne Tuytelaars, Lan Xu, Minye Wu

The success of Neural Radiance Fields (NeRF) for modeling and free-view rendering of static objects has inspired numerous attempts to extend them to dynamic scenes.

Neural Rendering

Human Performance Modeling and Rendering via Neural Animated Mesh

1 code implementation · 18 Sep 2022 · Fuqiang Zhao, Yuheng Jiang, Kaixin Yao, Jiakai Zhang, Liao Wang, Haizhao Dai, Yuhui Zhong, Yingliang Zhang, Minye Wu, Lan Xu, Jingyi Yu

In this paper, we present a comprehensive neural approach for high-quality reconstruction, compression, and rendering of human performances from dense multi-view videos.

Find a Way Forward: a Language-Guided Semantic Map Navigator

no code implementations · 7 Mar 2022 · Zehao Wang, Mingxiao Li, Minye Wu, Marie-Francine Moens, Tinne Tuytelaars

In this paper, we introduce the map-language navigation task where an agent executes natural language instructions and moves to the target position based only on a given 3D semantic map.

Imitation Learning

Fourier PlenOctrees for Dynamic Radiance Field Rendering in Real-time

no code implementations · CVPR 2022 · Liao Wang, Jiakai Zhang, Xinhang Liu, Fuqiang Zhao, Yanshun Zhang, Yingliang Zhang, Minye Wu, Lan Xu, Jingyi Yu

In this paper, we present a novel Fourier PlenOctree (FPO) technique to tackle efficient neural modeling and real-time rendering of dynamic scenes captured under the free-view video (FVV) setting.
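
The core trick, sketched below under illustrative assumptions (coefficient counts and layout are not taken from the paper): each octree leaf stores Fourier coefficients of a time-varying quantity such as density, so a single static tree covers the whole sequence and the value at time t is recovered by evaluating the series.

```python
# Hedged sketch: a per-leaf Fourier series evaluated at render time t.
import numpy as np

K = 5  # total coefficients kept per leaf: a0 plus two (cos, sin) pairs

def eval_fourier(coeffs, t):
    """Evaluate a0 + sum_k (a_k cos(2*pi*k*t) + b_k sin(2*pi*k*t)) for t in [0, 1)."""
    a0, rest = coeffs[0], coeffs[1:]
    ks = np.arange(1, len(rest) // 2 + 1)
    a, b = rest[0::2], rest[1::2]
    return a0 + (a * np.cos(2 * np.pi * ks * t)).sum() + (b * np.sin(2 * np.pi * ks * t)).sum()

rng = np.random.default_rng(0)
leaf_coeffs = rng.standard_normal(K)  # baked once per leaf

for t in (0.0, 0.25, 0.5):
    print(f"density at t={t}: {eval_fourier(leaf_coeffs, t):.3f}")
```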

iButter: Neural Interactive Bullet Time Generator for Human Free-viewpoint Rendering

no code implementations · 12 Aug 2021 · Liao Wang, Ziyu Wang, Pei Lin, Yuheng Jiang, Xin Suo, Minye Wu, Lan Xu, Jingyi Yu

To fill this gap, in this paper we propose a neural interactive bullet-time generator (iButter) for photo-realistic human free-viewpoint rendering from dense RGB streams, which enables flexible and interactive design for human bullet-time visual effects.

Video Generation

Neural Relighting and Expression Transfer On Video Portraits

no code implementations · 30 Jul 2021 · Youjia Wang, Taotao Zhou, Minzhang Li, Teng Xu, Minye Wu, Lan Xu, Jingyi Yu

We present a neural relighting and expression transfer technique to transfer the facial expressions from a source performer to a portrait video of a target performer while enabling dynamic relighting.

Multi-Task Learning · Neural Rendering

Few-shot Neural Human Performance Rendering from Sparse RGBD Videos

no code implementations · 14 Jul 2021 · Anqi Pang, Xin Chen, Haimin Luo, Minye Wu, Jingyi Yu, Lan Xu

To fill this gap, in this paper we propose a few-shot neural human rendering approach (FNHR) from only sparse RGBD inputs, which exploits the temporal and spatial redundancy to generate photo-realistic free-view output of human activities.

Neural Rendering

PIANO: A Parametric Hand Bone Model from Magnetic Resonance Imaging

1 code implementation · 21 Jun 2021 · Yuwei Li, Minye Wu, Yuyao Zhang, Lan Xu, Jingyi Yu

Hand modeling is critical for immersive VR/AR, action understanding, and human healthcare.

Action Understanding

Editable Free-viewpoint Video Using a Layered Neural Representation

1 code implementation · 30 Apr 2021 · Jiakai Zhang, Xinhang Liu, Xinyi Ye, Fuqiang Zhao, Yanshun Zhang, Minye Wu, Yingliang Zhang, Lan Xu, Jingyi Yu

Such a layered representation fully supports perception and realistic manipulation of the dynamic scene while still enabling a free viewing experience over a wide range of viewpoints.

Disentanglement · Scene Parsing +1
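
A minimal sketch of the layered compositing idea this representation suggests (the layer contents below are synthetic, and the blending is the standard "over" operator rather than the paper's exact formulation): each entity renders to its own RGBA layer, so one layer can be edited without touching the others, and the final frame is a front-to-back blend.

```python
# Front-to-back "over" compositing of independently rendered RGBA layers.
import numpy as np

def over_composite(layers):
    """layers: list of (H, W, 4) RGBA arrays ordered front to back."""
    h, w, _ = layers[0].shape
    out = np.zeros((h, w, 3))
    remaining = np.ones((h, w, 1))   # transmittance left for deeper layers
    for layer in layers:
        rgb, a = layer[..., :3], layer[..., 3:4]
        out += remaining * a * rgb
        remaining *= 1.0 - a
    return out

rng = np.random.default_rng(0)
layers = [rng.uniform(0.0, 1.0, (4, 4, 4)) for _ in range(3)]  # toy per-entity layers
print(over_composite(layers).shape)  # edit one layer; the rest stay untouched
```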

MirrorNeRF: One-shot Neural Portrait Radiance Field from Multi-mirror Catadioptric Imaging

no code implementations · 6 Apr 2021 · Ziyu Wang, Liao Wang, Fuqiang Zhao, Minye Wu, Lan Xu, Jingyi Yu

In this paper, we propose MirrorNeRF, a one-shot neural portrait free-viewpoint rendering approach using a catadioptric imaging system with multiple sphere mirrors and a single high-resolution digital camera. It is the first to combine neural radiance fields with catadioptric imaging, enabling one-shot photo-realistic human portrait reconstruction and rendering in a low-cost and casual capture setting.

Convolutional Neural Opacity Radiance Fields

1 code implementation · 5 Apr 2021 · Haimin Luo, Anpei Chen, Qixuan Zhang, Bai Pang, Minye Wu, Lan Xu, Jingyi Yu

In this paper, we propose a novel scheme to generate opacity radiance fields with a convolutional neural renderer for fuzzy objects. It is the first to combine explicit opacity supervision and a convolutional mechanism within the neural radiance field framework, enabling high-quality appearance and globally consistent alpha matte generation in arbitrary novel views.

Neural Video Portrait Relighting in Real-time via Consistency Modeling

1 code implementation · ICCV 2021 · Longwen Zhang, Qixuan Zhang, Minye Wu, Jingyi Yu, Lan Xu

In this paper, we propose a neural approach for real-time, high-quality and coherent video portrait relighting, which jointly models the semantic, temporal and lighting consistency using a new dynamic OLAT dataset.

Disentanglement · Single-Image Portrait Relighting
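
The OLAT idea rests on the linearity of light transport: a portrait lit by any environment equals a weighted sum of its one-light-at-a-time images, with weights drawn from the target lighting. A toy sketch with synthetic data (dimensions are arbitrary, and this is the classic identity rather than the paper's learned pipeline):

```python
# Image-based relighting: relit = sum_i w_i * OLAT_i, by linearity of light transport.
import numpy as np

rng = np.random.default_rng(0)
num_lights, h, w = 16, 8, 8
olat = rng.uniform(0.0, 1.0, (num_lights, h, w, 3))  # one image per point light
env_weights = rng.uniform(0.0, 1.0, num_lights)      # target lighting intensities

relit = np.tensordot(env_weights, olat, axes=1)      # weighted sum over the light axis
print(relit.shape)  # (8, 8, 3)
```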

GNeRF: GAN-based Neural Radiance Field without Posed Camera

1 code implementation · ICCV 2021 · Quan Meng, Anpei Chen, Haimin Luo, Minye Wu, Hao Su, Lan Xu, Xuming He, Jingyi Yu

We introduce GNeRF, a framework that marries Generative Adversarial Networks (GANs) with Neural Radiance Field (NeRF) reconstruction for complex scenarios with unknown and even randomly initialized camera poses.

Novel View Synthesis
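
A highly simplified sketch of the adversarial scheme GNeRF describes, with a plain MLP standing in for the NeRF renderer (everything below is an illustrative assumption, not the paper's architecture): patches are rendered from randomly sampled poses, and a discriminator pushes them toward the distribution of real image patches, so no pose annotations are needed up front.

```python
# Toy GAN loop: generator renders patches from random poses; discriminator
# compares them to real patches. A real NeRF renderer would replace `gen`.
import torch
import torch.nn as nn

PATCH = 16
gen = nn.Sequential(nn.Linear(6, 128), nn.ReLU(), nn.Linear(128, PATCH * PATCH * 3))
disc = nn.Sequential(nn.Linear(PATCH * PATCH * 3, 128), nn.ReLU(), nn.Linear(128, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

real_patches = torch.rand(256, PATCH * PATCH * 3)  # stand-in for dataset patches

for step in range(100):
    poses = torch.randn(32, 6)                     # randomly sampled camera poses
    fake = gen(poses)
    real = real_patches[torch.randint(0, 256, (32,))]

    # Discriminator step: real -> 1, rendered -> 0.
    loss_d = bce(disc(real), torch.ones(32, 1)) + bce(disc(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: make rendered patches look real.
    loss_g = bce(disc(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```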

NeuralHumanFVV: Real-Time Neural Volumetric Human Performance Rendering using RGB Cameras

no code implementations · CVPR 2021 · Xin Suo, Yuheng Jiang, Pei Lin, Yingliang Zhang, Kaiwen Guo, Minye Wu, Lan Xu

4D reconstruction and rendering of human activities are critical for immersive VR/AR experiences. Recent advances still fail to recover fine geometry and texture with the level of detail present in the input images from sparse multi-view RGB cameras.

4D reconstruction · Multi-Task Learning

ChallenCap: Monocular 3D Capture of Challenging Human Performances using Multi-Modal References

2 code implementations · CVPR 2021 · Yannan He, Anqi Pang, Xin Chen, Han Liang, Minye Wu, Yuexin Ma, Lan Xu

We propose a hybrid motion inference stage with a generation network, which utilizes a temporal encoder-decoder to extract the motion details from the pair-wise sparse-view reference, as well as a motion discriminator to utilize the unpaired marker-based references to extract specific challenging motion characteristics in a data-driven manner.

Multi-View Neural Human Rendering

1 code implementation · CVPR 2020 · Minye Wu, Yuehao Wang, Qiang Hu, Jingyi Yu

We present an end-to-end Neural Human Renderer (NHR) for dynamic human captures in the multi-view setting.

Generic Multiview Visual Tracking

no code implementations · 4 Apr 2019 · Minye Wu, Haibin Ling, Ning Bi, Shenghua Gao, Hao Sheng, Jingyi Yu

A natural solution to these challenges is to use multiple cameras with multiview inputs, though existing systems are mostly limited to specific targets (e.g., humans), static cameras, and/or camera calibration.

Camera Calibration · Trajectory Prediction +1

Deep Surface Light Fields

no code implementations · 15 Oct 2018 · Anpei Chen, Minye Wu, Yingliang Zhang, Nianyi Li, Jie Lu, Shenghua Gao, Jingyi Yu

A surface light field represents the radiance of rays originating from any point on the surface in any direction.

Data Compression · Image Registration
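
In code terms, a surface light field is just a function L(x, omega) giving outgoing radiance at a surface point in a direction. A toy analytic stand-in (a learned model would take its place in the paper's setting):

```python
# Toy surface light field L(x, omega): diffuse term plus a view-dependent lobe.
import numpy as np

def surface_light_field(x, omega, normal):
    """Radiance leaving surface point x (with normal) along direction omega."""
    omega = omega / np.linalg.norm(omega)
    diffuse = 0.3 + 0.1 * np.sin(x).sum()            # position-dependent base term
    specular = max(0.0, float(omega @ normal)) ** 8  # simple view-dependent lobe
    return diffuse + 0.5 * specular

x = np.array([0.2, 0.4, 0.1])
n = np.array([0.0, 0.0, 1.0])
print(surface_light_field(x, np.array([0.0, 0.3, 0.95]), n))
```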

4D Human Body Correspondences from Panoramic Depth Maps

no code implementations · CVPR 2018 · Zhong Li, Minye Wu, Wangyiteng Zhou, Jingyi Yu

The availability of affordable 3D full body reconstruction systems has given rise to free-viewpoint video (FVV) of human shapes.
