PhaseCam3D — Learning Phase Masks for Passive Single View Depth Estimation

3D imaging is critical for a myriad of applications such as autonomous driving, robotics, virtual reality, and surveillance. The current state of the art relies on active illumination-based techniques such as LIDAR, radar, structured illumination, or continuous-wave time-of-flight. However, many emerging applications, especially on mobile platforms, are severely power- and energy-constrained. Active approaches are unlikely to scale well for these applications, and hence there is a pressing need for robust passive 3D imaging technologies. Multi-camera systems provide state-of-the-art performance for passive 3D imaging. In these systems, triangulation between corresponding points on multiple views of the scene allows for 3D estimation. Stereo and multi-view stereo approaches meet some of the needs mentioned above, and an increasing number of mobile platforms have been adopting such technology. Unfortunately, having multiple cameras within a single platform increases both system cost and implementation complexity. The principal goal of this paper is to develop a passive, single-viewpoint 3D imaging system. We exploit the emerging computational imaging paradigm, wherein the optics and the computational algorithm are co-designed to maximize performance within operational constraints.
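The triangulation principle the abstract refers to can be illustrated with the standard rectified-stereo geometry, where depth follows directly from the disparity between corresponding points. This is a minimal sketch of that textbook relation (not code from the paper); the function name and example numbers are illustrative assumptions.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth Z (meters) of a point seen by a rectified stereo pair.

    Standard triangulation relation: Z = f * B / d, where
    f is the focal length in pixels, B the camera baseline in meters,
    and d the horizontal disparity in pixels between the two views.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers (assumed, not from the paper):
# 700 px focal length, 10 cm baseline, 14 px disparity -> 5 m depth.
print(depth_from_disparity(700, 0.10, 14))  # 5.0
```

Note how depth resolution degrades as disparity shrinks: distant points produce small disparities, which is one reason baseline (and hence multi-camera hardware) matters for passive stereo, and why a single-viewpoint alternative is attractive.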
