A Two-stage Deep Network for High Dynamic Range Image Reconstruction

19 Apr 2021 · SMA Sharif, Rizwan Ali Naqvi, Mithun Biswas, Kim Sungjun

Mapping a single-exposure low dynamic range (LDR) image into a high dynamic range (HDR) image is considered among the most strenuous image-to-image translation tasks due to exposure-related missing information. This study tackles the challenges of single-shot LDR-to-HDR mapping by proposing a novel two-stage deep network. Notably, the proposed method aims to reconstruct an HDR image without any hardware information, such as the camera response function (CRF) and exposure settings. The first stage therefore performs image enhancement tasks such as denoising and exposure correction, while the second stage learns tone mapping and bit-expansion from a convex set of data samples. Qualitative and quantitative comparisons demonstrate that the proposed method can outperform existing LDR-to-HDR works by a marginal difference. Apart from that, we collected an LDR image dataset incorporating different camera systems. Evaluation on the collected real-world LDR images illustrates that the proposed method can reconstruct plausible HDR images without introducing visual artefacts. Code available: https://github.com/sharif-apu/twostageHDR_NTIRE21.
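A minimal sketch of the two-stage idea described in the abstract, assuming a simple convolutional block for each stage. The module names (EnhancementStage, HDRStage, TwoStageHDR), layer widths, residual refinement, and output activation are illustrative assumptions, not the authors' released architecture; see the linked repository for the official code.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """3x3 convolution + ReLU, kept small for the sketch."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class EnhancementStage(nn.Module):
    """Stage 1 (assumed): enhance the LDR input, e.g. denoising and exposure correction."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(ConvBlock(3, ch), ConvBlock(ch, ch),
                                 nn.Conv2d(ch, 3, kernel_size=3, padding=1))

    def forward(self, ldr):
        # Residual refinement of the LDR image, clamped back to the displayable range (assumption).
        return torch.clamp(ldr + self.net(ldr), 0.0, 1.0)

class HDRStage(nn.Module):
    """Stage 2 (assumed): learn tone mapping / bit-expansion from the enhanced LDR to HDR."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(ConvBlock(3, ch), ConvBlock(ch, ch),
                                 nn.Conv2d(ch, 3, kernel_size=3, padding=1))

    def forward(self, enhanced_ldr):
        # Softplus keeps the predicted HDR radiance non-negative (assumption).
        return nn.functional.softplus(self.net(enhanced_ldr))

class TwoStageHDR(nn.Module):
    """Chains the two stages: LDR -> enhanced LDR -> HDR estimate."""
    def __init__(self):
        super().__init__()
        self.stage1 = EnhancementStage()
        self.stage2 = HDRStage()

    def forward(self, ldr):
        enhanced = self.stage1(ldr)
        hdr = self.stage2(enhanced)
        return enhanced, hdr

# Usage: a single-exposure LDR image in [0, 1] goes in; an enhanced LDR image
# and an HDR estimate come out.
model = TwoStageHDR()
ldr = torch.rand(1, 3, 256, 256)
enhanced, hdr = model(ldr)
```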

Task: Inverse Tone Mapping
Dataset: MSU HDR Video Reconstruction Benchmark
Model: twostageHDR

Metric Name    Metric Value    Global Rank
HDR-PSNR       31.6717         #8
HDR-VQM        0.1350          #3
HDR-SSIM       0.9884          #4
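For context on the table above, a minimal sketch of how an HDR-domain PSNR can be computed, assuming the linear HDR values are first compressed with a mu-law curve and normalized by the reference peak. The mu-law function, the mu value, and the peak normalization are illustrative assumptions, not the benchmark's official HDR-PSNR definition.

```python
import numpy as np

def mu_law(x, mu=5000.0):
    # Compress normalized linear HDR values into a perceptually flatter range.
    return np.log1p(mu * x) / np.log1p(mu)

def hdr_psnr(pred, ref, mu=5000.0):
    # PSNR between tone-compressed prediction and reference (illustrative).
    peak = ref.max()
    p, r = mu_law(pred / peak, mu), mu_law(ref / peak, mu)
    mse = np.mean((p - r) ** 2)
    return 10.0 * np.log10(1.0 / mse)

# Example with random arrays standing in for linear-radiance HDR frames (H x W x 3).
pred = np.random.rand(64, 64, 3) * 100.0
ref = np.random.rand(64, 64, 3) * 100.0
print(f"HDR-PSNR (illustrative): {hdr_psnr(pred, ref):.2f} dB")
```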
