Deep HDR Imaging via A Non-Local Network

One of the most challenging problems in reconstructing a high dynamic range (HDR) image from multiple low dynamic range (LDR) inputs is the ghosting artifacts caused by object motion across the inputs. When the object motion is slight, most existing methods can suppress ghosting artifacts well, either by aligning the LDR inputs based on optical flow or by detecting anomalies among them. However, they often fail to produce satisfactory results in practice, since real object motion can be very large. In this study, we present a novel deep framework, termed NHDRRnet, which takes an alternative direction and removes ghosting artifacts by exploiting the non-local correlation in the inputs. In NHDRRnet, we first adopt a Unet architecture to fuse all inputs and map the fusion results into a low-dimensional deep feature space. Then, we feed the resultant features into a novel global non-local module that reconstructs each pixel as a weighted average of all the other pixels, with weights determined by their correspondences. By doing this, the proposed NHDRRnet is able to adaptively select useful information across the whole deep feature space (e.g., information not corrupted by large motions or adverse lighting conditions) to accurately reconstruct each pixel. In addition, we incorporate a triple-pass residual module to capture more powerful local features, which proves effective in further boosting performance. Extensive experiments on three benchmark datasets demonstrate the superiority of the proposed NHDRRnet in terms of suppressing ghosting artifacts in HDR reconstruction, especially when objects undergo large motions.
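To make the global non-local module concrete, below is a minimal PyTorch sketch of a non-local block that reconstructs each spatial position as a correspondence-weighted average over all other positions. This is an illustration of the general non-local technique only, not the paper's released code: the class name GlobalNonLocalBlock, the 1x1-convolution embeddings, the channel reduction factor, and the residual connection are all assumptions for the sketch.

```python
# Minimal sketch of a global non-local block (assumed structure, not the
# authors' implementation). Each pixel of the output is a weighted average
# of all pixels of the input, with weights given by pairwise correspondences.
import torch
import torch.nn as nn


class GlobalNonLocalBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        inter = channels // reduction
        # 1x1 convolutions produce query/key/value embeddings (an assumption;
        # the paper only specifies correspondence-based weighted averaging).
        self.theta = nn.Conv2d(channels, inter, kernel_size=1)
        self.phi = nn.Conv2d(channels, inter, kernel_size=1)
        self.g = nn.Conv2d(channels, inter, kernel_size=1)
        self.out = nn.Conv2d(inter, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2)   # (B, inter, H*W) queries
        k = self.phi(x).flatten(2)     # (B, inter, H*W) keys
        v = self.g(x).flatten(2)       # (B, inter, H*W) values
        # Pairwise correspondences between all positions: (B, HW, HW);
        # softmax normalizes each row into averaging weights.
        attn = torch.softmax(q.transpose(1, 2) @ k, dim=-1)
        # Weighted average of every position's value for each output position.
        y = (v @ attn.transpose(1, 2)).view(b, -1, h, w)
        # Residual connection keeps the original local features.
        return x + self.out(y)


# Example: apply to hypothetical 64-channel fused deep features.
feats = torch.randn(1, 64, 32, 32)
block = GlobalNonLocalBlock(64)
print(block(feats).shape)  # torch.Size([1, 64, 32, 32])
```

Because the attention matrix is HW x HW, such a block is typically applied at the low-resolution bottleneck of an encoder-decoder (as the abstract's low-dimensional deep feature space suggests) to keep memory manageable.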
