Deblur-NeRF

Introduced by Ma et al. in Deblur-NeRF: Neural Radiance Fields from Blurry Images

This dataset focuses on two blur types: camera motion blur and defocus blur. For each blur type, we synthesize $5$ scenes using Blender, manually placing multi-view cameras to mimic real data capture. To render images with camera motion blur, we randomly perturb the camera pose for each view and linearly interpolate poses between the original and perturbed poses. We render an image from each interpolated pose and blend the renders in linear RGB space to produce the final blurry image. For defocus blur, we use Blender's built-in functionality to render depth-of-field images: we fix the aperture and randomly choose a focus plane between the nearest and farthest depth.
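The motion-blur synthesis above (perturb the pose, interpolate, render, blend in linear RGB) can be sketched as follows. This is a minimal illustration, not the authors' code: `render_fn`, the gamma-2.2 sRGB approximation, and the perturbation scale `sigma` are assumptions, and poses are treated as flat vectors (a faithful version would interpolate rotations with quaternion slerp).

```python
import numpy as np

def srgb_to_linear(img):
    # Approximate sRGB -> linear conversion (gamma 2.2); an assumption,
    # chosen because the blending is done in linear RGB space.
    return img ** 2.2

def linear_to_srgb(img):
    return img ** (1.0 / 2.2)

def interpolate_poses(pose_a, pose_b, n):
    # Linear interpolation between the original and perturbed pose.
    # For small perturbations a plain lerp over a flat pose vector is a
    # reasonable stand-in for interpolating full camera extrinsics.
    return [pose_a + (pose_b - pose_a) * t for t in np.linspace(0.0, 1.0, n)]

def synthesize_motion_blur(render_fn, pose, n_samples=10, sigma=0.01):
    """render_fn(pose) -> sRGB image in [0, 1]; a hypothetical renderer hook.

    Renders one image per interpolated pose and averages them in linear RGB.
    """
    perturbed = pose + np.random.normal(scale=sigma, size=pose.shape)
    frames = [render_fn(p) for p in interpolate_poses(pose, perturbed, n_samples)]
    blurred = np.mean([srgb_to_linear(f) for f in frames], axis=0)
    return linear_to_srgb(blurred)
```

With a real renderer plugged in as `render_fn`, each call produces one blurry training image from `n_samples` sharp renders along the interpolated camera path.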

We also captured $20$ real-world scenes, $10$ for each blur type, for a qualitative study. The camera was a Canon EOS RP in manual exposure mode. We captured the camera motion blur images by manually shaking the camera during exposure, while the reference images were taken with a tripod. To capture defocus images, we chose a large aperture. We compute the camera poses of the blurry and reference images in the real-world scenes using COLMAP.
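Pose estimation of this kind is typically done with COLMAP's standard sparse-reconstruction pipeline. The command sequence below is a sketch of a common invocation, not the authors' exact one; the paths are placeholders, and one scene's blurry and reference images are assumed to live together in `./scene/images`.

```shell
# Hypothetical layout: one scene's images (blurry + reference) in ./scene/images
DATASET=./scene

# Detect and describe keypoints in every image
colmap feature_extractor \
    --database_path $DATASET/database.db \
    --image_path $DATASET/images

# Match features between all image pairs
colmap exhaustive_matcher \
    --database_path $DATASET/database.db

# Incremental structure-from-motion: recovers camera poses (and a sparse model)
mkdir -p $DATASET/sparse
colmap mapper \
    --database_path $DATASET/database.db \
    --image_path $DATASET/images \
    --output_path $DATASET/sparse
```

Registering blurry and reference images in the same reconstruction puts their poses in a shared coordinate frame, which is what allows the sharp references to serve as ground truth for the blurry views.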
