MF-NeRF: Memory Efficient NeRF with Mixed-Feature Hash Table

25 Apr 2023  ·  YongJae Lee, Li Yang, Deliang Fan

Neural radiance field (NeRF) has shown remarkable performance in generating photo-realistic novel views. Among recent NeRF research, approaches that use explicit structures such as grids to manage features achieve exceptionally fast training by reducing the complexity of the multilayer perceptron (MLP) network. However, storing features in dense grids demands substantial memory, creating a notable memory bottleneck in the computer system. Consequently, training time increases significantly unless hyper-parameters are tuned beforehand. To address this issue, we are the first to propose MF-NeRF, a memory-efficient NeRF framework that employs a Mixed-Feature hash table to improve memory efficiency and reduce training time while maintaining reconstruction quality. Specifically, we first design a mixed-feature hash encoding that adaptively mixes parts of the multi-level feature grids and maps them to a single hash table. Then, to obtain the correct index of a grid point, we further develop an index transformation method that maps the indices of an arbitrary-level grid to those of a canonical grid. Extensive experiments benchmarking against the state-of-the-art Instant-NGP, TensoRF, and DVGO indicate that MF-NeRF achieves the fastest training time on the same GPU hardware while delivering similar or higher reconstruction quality.
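The abstract only describes the encoding at a high level. The snippet below is a minimal illustrative sketch, not the paper's implementation: it shows a multi-level grid encoding in which every level indexes one shared hash table, with a hypothetical `to_canonical` index transformation standing in for the paper's method. The hashing primes follow the Instant-NGP convention; the level resolutions, table size, and feature width are placeholder assumptions.

```python
import numpy as np

# Large primes used for spatial hashing (the constants used by Instant-NGP).
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.int64)

def to_canonical(idx, level_res, canonical_res):
    """Hypothetical index transformation: rescale grid indices from an
    arbitrary level's resolution onto the canonical grid, so that all levels
    can address the same shared hash table."""
    return (idx * (canonical_res // level_res)).astype(np.int64)

def spatial_hash(idx, table_size):
    """XOR-of-prime-products spatial hash into a table with table_size entries."""
    h = (idx * PRIMES).astype(np.int64)
    return (h[..., 0] ^ h[..., 1] ^ h[..., 2]) % table_size

def mixed_feature_encode(x, table, level_resolutions, canonical_res):
    """Encode points x of shape (N, 3) in [0, 1]^3 by trilinearly interpolating
    grid features at several resolutions, with every level sharing the single
    feature table of shape (T, F)."""
    outputs = []
    for res in level_resolutions:
        pos = x * res
        base = np.floor(pos).astype(np.int64)
        frac = pos - base
        level_feat = np.zeros((x.shape[0], table.shape[1]))
        for corner in range(8):                       # 8 corners of the voxel
            offset = np.array([(corner >> d) & 1 for d in range(3)])
            corner_idx = to_canonical(base + offset, res, canonical_res)
            w = np.prod(np.where(offset == 1, frac, 1.0 - frac),
                        axis=-1, keepdims=True)       # trilinear weight
            level_feat += w * table[spatial_hash(corner_idx, table.shape[0])]
        outputs.append(level_feat)
    return np.concatenate(outputs, axis=-1)           # (N, L * F)

# Usage: 4 levels sharing one 2^16-entry table of 2-D features (all assumed values).
table = np.random.randn(2 ** 16, 2).astype(np.float32)
x = np.random.rand(1024, 3)
enc = mixed_feature_encode(x, table, level_resolutions=[16, 32, 64, 128],
                           canonical_res=128)
print(enc.shape)  # (1024, 8)
```

Because every level hashes into the same table after the index transformation, the table size is fixed up front rather than growing with the number of levels, which is the memory-saving behavior the abstract attributes to the mixed-feature design.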
