ESPADA (Extended Synthetic and Photogrammetric Aerial-Image Dataset)

We present ESPADA, a new aerial image dataset intended for training deep neural networks that estimate a depth image from a single aerial image. Because it is difficult to create aerial datasets that pair chromatic (RGB) images with their corresponding depth images, simulators such as AirSim have been used to generate synthetic images from photorealistic scenes; such simulators can produce thousands of image pairs for training and evaluating neural models. We argue, however, that synthetic photorealistic aerial image datasets can be improved by adding images rendered from photogrammetric models imported into the simulator, which yields a less artificial generation of both the chromatic and the depth images. To assess the quality of these images, we compare the performance of four deep neural networks whose pre-trained models and re-training code are publicly available. We also use the RGB-D version of ORB-SLAM to assess the estimated depth images indirectly: chromatic images from three aerial videos, together with the depth images estimated by the networks trained on ESPADA, are fed into ORB-SLAM, and the estimated camera poses are compared against the trajectory recovered from the GPS flight log. Our results indicate that images generated from photogrammetric models improve the performance of depth estimation from a single aerial image.
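The trajectory comparison described above is typically done by first aligning the SLAM trajectory (which is known only up to a similarity transform) to the GPS reference and then computing an absolute trajectory error. Below is a minimal sketch of that evaluation step, assuming both trajectories are given as `(N, 3)` NumPy arrays of synchronized positions; function names and the use of Umeyama least-squares alignment are illustrative assumptions, not part of the ESPADA release.

```python
import numpy as np

def align_umeyama(est, ref):
    """Least-squares similarity alignment (Umeyama) of an estimated
    trajectory `est` to a reference trajectory `ref`, both (N, 3)."""
    mu_e, mu_r = est.mean(axis=0), ref.mean(axis=0)
    E, R_c = est - mu_e, ref - mu_r
    # Cross-covariance between reference and estimate.
    U, S, Vt = np.linalg.svd(R_c.T @ E / len(est))
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[2, 2] = -1.0  # guard against a reflection
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / E.var(axis=0).sum()
    t = mu_r - s * R @ mu_e
    return s, R, t

def ate_rmse(est, ref):
    """RMSE of position error after similarity alignment (ATE)."""
    s, R, t = align_umeyama(est, ref)
    aligned = (s * (R @ est.T)).T + t
    return np.sqrt(np.mean(np.sum((aligned - ref) ** 2, axis=1)))
```

In practice the GPS positions would first be converted to a local metric frame (e.g. ENU) and time-synchronized with the SLAM keyframes before calling `ate_rmse`.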

License


  • Unknown
