Scene-adaptive Coded Apertures Imaging

19 Jun 2015 · Xuehui Wang, Jinli Suo, Jingyi Yu, Yongdong Zhang, Qionghai Dai

Coded aperture imaging systems have recently shown great success in recovering scene depth and extending the depth of field. The ideal pattern, however, would have to serve two conflicting purposes: 1) be broadband to ensure robust deconvolution, and 2) have sufficient zero-crossings for high depth discrimination. This paper presents a simple but effective scene-adaptive coded aperture solution to bridge this gap. We observe that the geometric structures in a natural scene often exhibit only a few edge directions, and that successive frames are strongly correlated. We therefore adopt a spatial partitioning and temporal propagation scheme. In each frame, we address one principal direction by applying depth-discriminative codes along it and broadband codes along its orthogonal direction. Since within a frame only the regions whose edge directions match the aperture code behave well, we exploit the strong inter-frame correlation to propagate the high-quality single-frame results temporally and obtain high performance over the whole image lattice. To physically implement this scheme, we use a Liquid Crystal on Silicon (LCoS) microdisplay that permits rapid switching of pattern codes. First, we capture the scene with a pinhole aperture and analyze its content to determine the primary edge orientations. Then, we sequentially apply the proposed coding scheme with these orientations in the following frames. Experiments on both synthetic and real scenes show that our technique combines the advantages of state-of-the-art patterns to recover better-quality depth maps and all-in-focus images.
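As an illustration of the scene-analysis step, the minimal sketch below shows one way the primary edge orientations could be extracted from the pinhole frame: it histograms gradient orientations weighted by gradient magnitude and returns the strongest directions. The function name, bin count, and NumPy-based formulation are our assumptions, not the paper's implementation.

```python
import numpy as np

def principal_edge_orientations(pinhole_img, n_bins=36, n_dirs=2):
    """Estimate dominant edge directions (radians) of a pinhole frame.

    Hypothetical helper: builds a magnitude-weighted histogram of
    gradient orientations (modulo pi, since edges are unsigned) and
    returns the centers of the strongest bins.
    """
    gy, gx = np.gradient(pinhole_img.astype(float))
    mag = np.hypot(gx, gy)
    theta = np.mod(np.arctan2(gy, gx), np.pi)  # orientation in [0, pi)
    hist, edges = np.histogram(theta, bins=n_bins, range=(0.0, np.pi),
                               weights=mag)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[np.argsort(hist)[::-1][:n_dirs]]
```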
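The per-frame coding scheme (depth-discriminative along the principal direction, broadband along the orthogonal one) can be mimicked with a separable construction: the 2D spectrum of an outer product factors into the two 1D spectra, so a 1D code with spectral nulls on one axis paired with a flat-spectrum code on the other yields the desired anisotropy. The sketch below, including the placeholder codes and the separable construction itself, is our own assumption; the paper's optimized codes will differ.

```python
import numpy as np
from scipy.ndimage import rotate

def separable_aperture(code_disc, code_broad, angle_deg, size=64):
    """Outer product of a depth-discriminative 1D code and a broadband
    1D code, rotated so the discriminative axis aligns with a detected
    principal edge orientation.  Illustrative sketch only."""
    rep_d = np.kron(np.asarray(code_disc, float),
                    np.ones(size // len(code_disc)))
    rep_b = np.kron(np.asarray(code_broad, float),
                    np.ones(size // len(code_broad)))
    pattern = np.outer(rep_d, rep_b)
    # Bilinear rotation; clipping keeps transmittance in [0, 1],
    # as required for display on the LCoS.
    return np.clip(rotate(pattern, angle_deg, reshape=False, order=1),
                   0.0, 1.0)

# Placeholder binary codes (hypothetical, not the paper's patterns):
# the length-8 DFT of code_disc has several nulls (depth cues), while
# that of code_broad has none (robust deconvolution).
code_disc  = [1, 0, 0, 1, 1, 0, 0, 1]
code_broad = [1, 1, 1, 0, 1, 1, 0, 0]
aperture = separable_aperture(code_disc, code_broad, angle_deg=30.0)
```

The anisotropy can be checked by inspecting `np.abs(np.fft.fft2(aperture))` along and across the rotated axis: nulls should appear only along the discriminative direction.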
