Depth Reconstruction of Translucent Objects from a Single Time-of-Flight Camera using Deep Residual Networks

We propose a novel approach to recovering the depth of translucent objects from a single time-of-flight (ToF) depth camera using deep residual networks. When translucent objects are recorded with a ToF depth camera, their depth values are severely contaminated by complex light interactions with the surrounding environment. While existing methods have proposed new capture systems or developed depth-distortion models, their solutions are less practical because of strict assumptions or heavy computational complexity. In this paper, we adopt deep residual networks to model the ToF depth distortion caused by translucency. To fully utilize both the local and semantic information of objects, multi-scale patches are used to predict the depth value. Through quantitative and qualitative evaluation on our benchmark database, we show the effectiveness and robustness of the proposed algorithm.
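The abstract describes residual networks applied to patches cropped at multiple scales around each pixel, so that both local detail and broader semantic context inform the corrected depth. The sketch below illustrates one plausible reading of that design in PyTorch; it is not the authors' released code, and all layer sizes, the number of scales, and the concatenation-based fusion are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's implementation): per-scale
# residual encoders over multi-scale ToF depth patches, fused to regress a
# refined depth value for the patch centre.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = self.conv2(out)
        return F.relu(out + x)  # residual connection


class MultiScaleDepthNet(nn.Module):
    """Encodes patches cropped at several scales around the same pixel,
    concatenates the encodings, and regresses a single refined depth value."""
    def __init__(self, num_scales=3, channels=32):
        super().__init__()
        self.encoders = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, channels, 3, padding=1),
                ResidualBlock(channels),
                ResidualBlock(channels),
                nn.AdaptiveAvgPool2d(1),  # pool each patch to a feature vector
            )
            for _ in range(num_scales)
        ])
        self.head = nn.Sequential(
            nn.Linear(num_scales * channels, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # predicted depth at the patch centre
        )

    def forward(self, patches):
        # patches: list of num_scales tensors, each (B, 1, H_s, W_s),
        # cropped around the same pixel with increasing receptive fields.
        feats = [enc(p).flatten(1) for enc, p in zip(self.encoders, patches)]
        return self.head(torch.cat(feats, dim=1))


if __name__ == "__main__":
    net = MultiScaleDepthNet()
    # Three hypothetical patch scales around one pixel of a ToF depth map.
    patches = [torch.randn(4, 1, s, s) for s in (16, 32, 64)]
    print(net(patches).shape)  # torch.Size([4, 1])
```

Pooling each scale to a fixed-length vector lets patches of different sizes share one regression head; the paper may fuse scales differently, but the principle of combining local and semantic context is the same.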
