
Improving Image-Based Localization with Deep Learning: The Impact of the Loss Function

This work investigates the impact of the loss function on the performance of neural networks in the context of a monocular, RGB-only image localization task. A common technique when regressing a camera's pose from an image is to formulate the loss as a linear combination of positional and rotational mean squared error, with tuned hyperparameters as coefficients. In this work we observe that changes to rotation and position mutually affect the captured image, and that, to improve performance, a pose regression network's loss function should include a term which combines the errors of these two coupled quantities. Based on task-specific observations and experimental tuning, we present such a loss term and create a new model by appending it to the loss function of the pre-existing pose regression network PoseNet. We achieve improvements in the localization accuracy of the network for indoor scenes, with decreases of up to 26.7% and 24.0% in the median positional and rotational error, respectively, compared to the default PoseNet.
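As a rough illustration of the loss structure described above, here is a minimal PyTorch sketch. The baseline portion follows the standard PoseNet formulation (position error plus a beta-weighted rotation error against the unit-normalized ground-truth quaternion). The coupled term shown here, the product of the two errors weighted by gamma, is a hypothetical stand-in: the abstract does not give the paper's actual formulation, and the names `posenet_loss`, `beta`, and `gamma` are assumptions for this sketch.

```python
import torch

def posenet_loss(x_pred, x_true, q_pred, q_true, beta=500.0, gamma=1.0):
    """Pose regression loss: position error + beta * rotation error,
    plus a hypothetical gamma-weighted coupled term (assumption, see above)."""
    # Positional error: Euclidean distance between predicted and true positions.
    pos_err = torch.norm(x_pred - x_true, dim=-1)
    # Rotational error: distance between the predicted quaternion and the
    # unit-normalized ground-truth quaternion, as in the original PoseNet.
    q_true_unit = q_true / torch.norm(q_true, dim=-1, keepdim=True)
    rot_err = torch.norm(q_pred - q_true_unit, dim=-1)
    # Hypothetical coupled term: the product of the two errors, so the loss
    # penalizes position and rotation jointly. This is an illustrative
    # assumption, not the paper's published term.
    coupled = pos_err * rot_err
    return (pos_err + beta * rot_err + gamma * coupled).mean()
```

Setting gamma to zero recovers the default PoseNet loss, which makes this form convenient for ablating the effect of the added term.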
