DronePose: Photorealistic UAV-Assistant Dataset Synthesis for 3D Pose Estimation via a Smooth Silhouette Loss

20 Aug 2020  ·  Georgios Albanis, Nikolaos Zioulis, Anastasios Dimou, Dimitrios Zarpalas, Petros Daras

In this work we consider UAVs as cooperative agents supporting human users in their operations. In this context, the 3D localisation of the UAV assistant is an important task that can facilitate the exchange of spatial information between the user and the UAV. To address this in a data-driven manner, we design a data synthesis pipeline to create a realistic multimodal dataset that includes both the exocentric user view and the egocentric UAV view. We then exploit the joint availability of photorealistic and synthesized inputs to train a single-shot monocular pose estimation model. During training, we leverage differentiable rendering to supplement a state-of-the-art direct regression objective with a novel smooth silhouette loss. Our results demonstrate its qualitative and quantitative performance gains over traditional silhouette objectives. Our data and code are available at https://vcl3d.github.io/DronePose
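The abstract does not spell out the loss formulation, but the core idea of a smooth silhouette objective can be illustrated with a minimal sketch: instead of comparing hard binary silhouettes (whose per-pixel overlap gives flat, uninformative gradients away from exact alignment), both the rendered and the target silhouette are smoothed before comparison, so small pose offsets produce a graded error signal near object boundaries. The functions below (`gaussian_kernel`, `smooth`, `smooth_silhouette_loss`) are hypothetical names for illustration, not the paper's actual implementation, which uses differentiable rendering inside the training loop.

```python
import numpy as np

def gaussian_kernel(size: int = 7, sigma: float = 2.0) -> np.ndarray:
    """Normalized 2D Gaussian kernel used to soften silhouette edges."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def smooth(mask: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive 2D convolution with zero padding (for clarity, not speed)."""
    p = kernel.shape[0] // 2
    padded = np.pad(mask.astype(float), p)
    out = np.zeros(mask.shape, dtype=float)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            window = padded[i:i + kernel.shape[0], j:j + kernel.shape[1]]
            out[i, j] = float(np.sum(window * kernel))
    return out

def smooth_silhouette_loss(rendered: np.ndarray,
                           target: np.ndarray,
                           kernel: np.ndarray) -> float:
    """MSE between smoothed silhouettes: zero at perfect overlap,
    and rising gradually (rather than abruptly) as the masks drift apart."""
    return float(np.mean((smooth(rendered, kernel) - smooth(target, kernel)) ** 2))
```

For example, a rendered mask that exactly matches the target yields zero loss, while shifting it by a single pixel yields a small positive loss concentrated around the boundary, which is the property that makes the objective better behaved than a hard binary silhouette comparison.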


Datasets


Introduced in the Paper:

UAVA

Used in the Paper:

Matterport3D, AirSim, Blackbird

Results from the Paper


Task: Drone Pose Estimation · Dataset: UAVA · Model: DronePose

Metric                     Value   Global Rank
Normalized Position Error  0.012   #1
Orientation Error          0.059   #1
Combined Pose Error        0.076   #1

Methods


No methods listed for this paper.