We unify these steps by directly compressing an implicit representation of the scene, a function that maps spatial coordinates to a radiance vector field, which can then be queried to render arbitrary viewpoints.
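As a minimal sketch of the idea (the network shape and weights below are illustrative assumptions, not the paper's trained model), an implicit scene representation is simply a function from spatial coordinates to a radiance vector, which a renderer queries at arbitrary sample points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer MLP standing in for the learned implicit function.
# The weights are random placeholders; a real model is fit to one scene.
W1 = rng.standard_normal((3, 64)) * 0.1    # 3D coordinate -> hidden features
W2 = rng.standard_normal((64, 3)) * 0.1    # hidden features -> RGB radiance

def radiance(xyz):
    """Map a 3D point to an RGB radiance vector (sketch only)."""
    h = np.maximum(xyz @ W1, 0.0)            # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2)))   # sigmoid keeps RGB in [0, 1]

# Query the field at an arbitrary point, as a renderer would per ray sample.
rgb = radiance(np.array([0.1, -0.2, 0.5]))
```

Compressing the scene then amounts to compressing the parameters of this function (here `W1` and `W2`) rather than any explicit geometry or images.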
Danhang Tang, Saurabh Singh, Philip A. Chou, Christian Haene, Mingsong Dou, Sean Fanello, Jonathan Taylor, Philip Davidson, Onur G. Guleryuz, Yinda Zhang, Shahram Izadi, Andrea Tagliasacchi, Sofien Bouaziz, Cem Keskin
We describe a novel approach for compressing truncated signed distance fields (TSDFs) stored in 3D voxel grids, and their corresponding textures.
Since clusters may have different numbers of points, each block transform must incorporate the relative importance of each coefficient.
This paper addresses the problem of compression of 3D point cloud sequences that are characterized by moving 3D positions and color attributes.
In texture-plus-depth representation of a 3D scene, depth maps from different camera viewpoints are typically lossily compressed via the classical transform coding / coefficient quantization paradigm.
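The classical transform coding / coefficient quantization paradigm mentioned above can be sketched in a few lines (the block size and quantization step are illustrative choices, not values from any particular codec): transform a depth block with an orthonormal DCT, uniformly quantize the coefficients, then invert.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are frequency vectors)."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= np.sqrt(1.0 / n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

def code_block(block, step):
    """Transform-code a square depth block: 2D DCT, quantize, reconstruct."""
    D = dct_matrix(block.shape[0])
    coeffs = D @ block @ D.T           # forward 2D DCT
    q = np.round(coeffs / step)        # coefficient quantization (the lossy step)
    return D.T @ (q * step) @ D        # dequantize and inverse DCT

# Toy 8x8 depth-map block: a smooth ramp of depths in meters.
depth = np.linspace(1.0, 2.0, 64).reshape(8, 8)
recon = code_block(depth, step=0.05)
err = np.abs(recon - depth).max()
```

In an encoder the integer symbols `q` would be entropy-coded; the reconstruction error is governed entirely by the quantization step, since the orthonormal transform itself is lossless.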