Search Results for author: RuiLong Li

Found 11 papers, 9 papers with code

NeRF-XL: Scaling NeRFs with Multiple GPUs

no code implementations · 24 Apr 2024 · RuiLong Li, Sanja Fidler, Angjoo Kanazawa, Francis Williams

We present NeRF-XL, a principled method for distributing Neural Radiance Fields (NeRFs) across multiple GPUs, thus enabling the training and rendering of NeRFs with an arbitrarily large capacity.
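To illustrate the general idea of distributing a radiance field across devices, here is a minimal sketch of a spatial partition: the scene extent is split into equal slabs along one axis, with one sub-NeRF (one GPU) responsible for each slab. The function name and the equal-slab scheme are illustrative assumptions for this sketch, not the paper's actual partitioning method.

```python
import numpy as np

def partition_samples(points, num_gpus):
    """Assign each 3D sample point to a GPU by splitting the scene's
    x-extent into `num_gpus` equal slabs. Illustrative sketch only;
    NeRF-XL's actual partitioning scheme differs."""
    x = points[:, 0]
    lo, hi = x.min(), x.max()
    # Normalize x into [0, num_gpus) and floor to get a slab index.
    idx = np.floor((x - lo) / (hi - lo + 1e-9) * num_gpus).astype(int)
    return np.clip(idx, 0, num_gpus - 1)

# Example: four samples along x, four GPUs -> one sample per slab.
pts = np.array([[0.0, 0, 0], [0.3, 0, 0], [0.6, 0, 0], [0.9, 0, 0]])
assignment = partition_samples(pts, num_gpus=4)  # -> [0, 1, 2, 3]
```

Each GPU then only evaluates (and stores parameters for) samples falling in its own region, which is what lets total model capacity grow with the number of devices.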

NerfAcc: Efficient Sampling Accelerates NeRFs

no code implementations ICCV 2023 RuiLong Li, Hang Gao, Matthew Tancik, Angjoo Kanazawa

Optimizing and rendering Neural Radiance Fields is computationally expensive due to the vast number of samples required by volume rendering.
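The cost referred to here comes from evaluating the network at many samples per ray. For reference, the standard NeRF volume-rendering quadrature that turns per-sample densities into compositing weights can be sketched as follows (the function name is illustrative; this is the textbook formulation, not NerfAcc's implementation):

```python
import numpy as np

def volume_render_weights(sigmas, deltas):
    """Standard volume-rendering quadrature: per-sample weight
    w_i = T_i * (1 - exp(-sigma_i * delta_i)), where the transmittance
    T_i = prod_{j<i} exp(-sigma_j * delta_j) is the fraction of light
    surviving to sample i."""
    alphas = 1.0 - np.exp(-sigmas * deltas)  # opacity of each interval
    # T_i: cumulative product of (1 - alpha) over all earlier samples.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    return trans * alphas

# Example: five samples along one ray.
sigmas = np.array([0.0, 0.5, 2.0, 5.0, 0.1])  # densities
deltas = np.full(5, 0.1)                      # interval lengths
weights = volume_render_weights(sigmas, deltas)
# The weights sum to at most 1; the remainder is light reaching the background.
```

Because every sample requires a network evaluation, skipping samples whose expected weight is negligible (e.g. in empty space) is what makes efficient sampling pay off.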

Nerfstudio: A Modular Framework for Neural Radiance Field Development

2 code implementations · 8 Feb 2023 · Matthew Tancik, Ethan Weber, Evonne Ng, RuiLong Li, Brent Yi, Justin Kerr, Terrance Wang, Alexander Kristoffersen, Jake Austin, Kamyar Salahi, Abhik Ahuja, David McAllister, Angjoo Kanazawa

Neural Radiance Fields (NeRF) are a rapidly growing area of research with wide-ranging applications in computer vision, graphics, robotics, and more.

Monocular Dynamic View Synthesis: A Reality Check

1 code implementation · 24 Oct 2022 · Hang Gao, RuiLong Li, Shubham Tulsiani, Bryan Russell, Angjoo Kanazawa

We study the recent progress on dynamic view synthesis (DVS) from monocular video.

NerfAcc: A General NeRF Acceleration Toolbox

1 code implementation · 10 Oct 2022 · RuiLong Li, Matthew Tancik, Angjoo Kanazawa

We propose NerfAcc, a toolbox for efficient volumetric rendering of radiance fields.

TAVA: Template-free Animatable Volumetric Actors

1 code implementation · 17 Jun 2022 · RuiLong Li, Julian Tanke, Minh Vo, Michael Zollhöfer, Jürgen Gall, Angjoo Kanazawa, Christoph Lassner

Since TAVA does not require a body template, it is applicable to humans as well as other creatures such as animals.

PlenOctrees for Real-time Rendering of Neural Radiance Fields

5 code implementations ICCV 2021 Alex Yu, RuiLong Li, Matthew Tancik, Hao Li, Ren Ng, Angjoo Kanazawa

We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation which supports view-dependent effects.
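The real-time speed comes from replacing per-sample network evaluation with lookups into a precomputed sparse octree. A minimal sketch of such a lookup (illustrative class and function names; not the paper's actual data structure) descends from the root to the leaf containing a query point:

```python
import numpy as np

class OctreeNode:
    """Minimal sparse-octree node: leaves store a value (in PlenOctrees,
    density plus spherical-harmonic coefficients); internal nodes hold
    up to 8 children. Illustrative sketch only."""
    def __init__(self, value=None):
        self.value = value    # set on leaves
        self.children = None  # dict {0..7: OctreeNode} on internal nodes

def query(node, p, lo=np.zeros(3), size=1.0):
    """Descend to the leaf containing point p in [0,1)^3. Each level
    picks one of 8 octants by comparing p to the cell midpoint, so the
    lookup is O(tree depth) with no network evaluation."""
    while node.children is not None:
        half = size / 2.0
        lo = lo.copy()
        octant = 0
        for d in range(3):
            if p[d] >= lo[d] + half:
                octant |= 1 << d  # set bit d for the upper half in dim d
                lo[d] += half
        node = node.children[octant]
        size = half
    return node.value

# Example: a one-level tree with 8 leaves labeled by octant index.
root = OctreeNode()
root.children = {i: OctreeNode(value=i) for i in range(8)}
leaf = query(root, np.array([0.75, 0.25, 0.25]))  # upper x half -> octant 1
```

Storing view-dependent appearance as spherical-harmonic coefficients at the leaves is what lets the precomputed structure still produce view-dependent effects at render time.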

Neural Rendering · Novel View Synthesis

AI Choreographer: Music Conditioned 3D Dance Generation with AIST++

1 code implementation ICCV 2021 RuiLong Li, Shan Yang, David A. Ross, Angjoo Kanazawa

We present AIST++, a new multi-modal dataset of 3D dance motion and music, along with FACT, a Full-Attention Cross-modal Transformer network for generating 3D dance motion conditioned on music.

Motion Synthesis · Pose Estimation

Monocular Real-Time Volumetric Performance Capture

1 code implementation ECCV 2020 Ruilong Li, Yuliang Xiu, Shunsuke Saito, Zeng Huang, Kyle Olszewski, Hao Li

We present the first approach to volumetric performance capture and novel-view rendering at real-time speed from monocular video, eliminating the need for expensive multi-view systems or cumbersome pre-acquisition of a personalized template model.

3D Human Shape Estimation

Learning Formation of Physically-Based Face Attributes

1 code implementation CVPR 2020 Ruilong Li, Karl Bladin, Yajie Zhao, Chinmay Chinara, Owen Ingraham, Pengda Xiang, Xinglei Ren, Pratusha Prasad, Bipin Kishore, Jun Xing, Hao Li

Based on a combined data set of 4000 high-resolution facial scans, we introduce a non-linear morphable face model, capable of producing multifarious face geometry of pore-level resolution, coupled with material attributes for use in physically-based rendering.

Data Visualization · Face Model
