Improving Image-Based Localization with Deep Learning: The Impact of the Loss Function

28 Apr 2019 · Isaac Ronald Ward, M. A. Asim K. Jalwana, Mohammed Bennamoun

This work investigates the impact of the loss function on the performance of neural networks in the context of a monocular, RGB-only image localization task. A common technique when regressing a camera's pose from an image is to formulate the loss as a linear combination of the positional and rotational mean squared errors, weighted by tuned hyperparameters. In this work we observe that changes to rotation and position mutually affect the captured image, and that a pose regression network's loss function should therefore include a term which combines the errors of these two coupled quantities. Based on task-specific observations and experimental tuning, we present such a loss term and create a new model by appending it to the loss function of the pre-existing pose regression network PoseNet. We achieve improvements in the localization accuracy of the network for indoor scenes, with decreases of up to 26.7% and 24.0% in the median positional and rotational errors respectively, compared to the default PoseNet.
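The abstract does not give the exact form of the proposed coupling term, but the overall structure of the loss can be sketched. Below is a minimal PyTorch sketch, assuming a quaternion orientation representation as in PoseNet; the function name coupled_pose_loss, the weights beta and gamma, and the product form of the coupled term are illustrative assumptions, not the paper's actual formulation.

```python
import torch

def coupled_pose_loss(p_hat, q_hat, p, q, beta=500.0, gamma=1.0):
    """PoseNet-style loss with an added position/rotation coupling term.

    p_hat, p : (B, 3) predicted / ground-truth camera positions.
    q_hat, q : (B, 4) predicted / ground-truth orientation quaternions.
    beta     : tuned weight balancing the rotational term against the
               positional term, as in the standard PoseNet loss.
    gamma    : hypothetical weight for the coupled term (illustrative).
    """
    # Normalise the predicted quaternion to a valid unit rotation.
    q_hat = q_hat / q_hat.norm(dim=-1, keepdim=True)

    # Per-sample positional and rotational squared errors.
    pos_err = (p_hat - p).pow(2).sum(dim=-1)
    rot_err = (q_hat - q).pow(2).sum(dim=-1)

    # Standard linear combination of the two errors, as in PoseNet.
    base = pos_err + beta * rot_err

    # Coupled term: the abstract argues the loss should also include a
    # term combining both errors. A product is one simple way to couple
    # them, chosen here purely for illustration; the paper's exact term
    # may differ.
    coupled = gamma * pos_err * rot_err

    return (base + coupled).mean()
```

In use, this would simply replace the plain linear-combination loss during training, with gamma tuned experimentally alongside beta.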
