22 papers with code • 0 benchmarks • 0 datasets
Auxiliary learning aims to find or design auxiliary tasks that improve performance on one or more primary tasks.
(Image credit: Self-Supervised Generalisation with Meta Auxiliary Learning)
The loss for the label-generation network incorporates the loss of the multi-task network, so this interaction between the two networks can be seen as a form of meta learning with a double gradient.
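The double-gradient idea above can be sketched on a toy scalar model (this is an illustrative sketch, not the paper's implementation): a label-generation parameter `phi` produces auxiliary targets for a primary parameter `w`, and `phi` is updated by differentiating the primary loss through `w`'s own gradient step. All names, learning rates, and the analytic gradients are assumptions for this minimal example.

```python
def inner_update(w, phi, x, lr):
    # Multi-task loss: primary prediction w*x fit to the generated label phi*x.
    grad_w = 2.0 * x * (w * x - phi * x)
    return w - lr * grad_w                 # one SGD step on the primary model

def meta_grad_phi(w, phi, x, y, lr):
    # Primary loss after the inner step, differentiated w.r.t. phi:
    # this second differentiation through the update is the "double gradient".
    w_new = inner_update(w, phi, x, lr)
    dwnew_dphi = 2.0 * lr * x * x          # how the inner step depends on phi
    return 2.0 * (w_new * x - y) * x * dwnew_dphi

w, phi = 0.0, 0.0
x, y = 1.0, 3.0                            # one training example with true label y
lr, meta_lr = 0.1, 0.5
for _ in range(200):
    phi -= meta_lr * meta_grad_phi(w, phi, x, y, lr)  # meta step on label generator
    w = inner_update(w, phi, x, lr)                   # ordinary step on primary model
```

After training, the generated labels steer the primary model toward the true target, even though `w` never sees `y` directly; in practice this double gradient is computed by an autodiff framework rather than by hand.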
One particular requirement for such robots is that they understand spatial relations and can place objects in accordance with the spatial relations expressed by their user.
We evaluate our proposed VLocNet on indoor as well as outdoor datasets and show that even our single task model exceeds the performance of state-of-the-art deep architectures for global localization, while achieving competitive performance for visual odometry estimation.
As a data-driven approach, meta-learning requires meta-features that represent the primary learning tasks or datasets; traditionally these are estimated as engineered dataset statistics that require expert domain knowledge tailored to every meta-task.
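A minimal sketch of such engineered dataset statistics, assuming a small classification dataset given as plain Python lists; the specific feature set and names here are illustrative choices, not a standard from the text:

```python
import math

def dataset_meta_features(X, y):
    """Hand-engineered meta-features summarizing a labeled dataset."""
    n_samples, n_features = len(X), len(X[0])
    flat = [v for row in X for v in row]
    mean = sum(flat) / len(flat)
    std = math.sqrt(sum((v - mean) ** 2 for v in flat) / len(flat))
    counts = {}
    for label in y:
        counts[label] = counts.get(label, 0) + 1
    # Shannon entropy of the class distribution, a common meta-feature.
    class_entropy = -sum((c / n_samples) * math.log2(c / n_samples)
                         for c in counts.values())
    return {
        "n_samples": n_samples,
        "n_features": n_features,
        "feature_mean": mean,
        "feature_std": std,
        "class_entropy": class_entropy,
    }

X = [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0], [6.0, 7.0]]
y = [0, 0, 1, 1]
mf = dataset_meta_features(X, y)
```

Learned meta-feature approaches replace such fixed statistics with representations estimated from the data itself.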
Our method learns to learn a primary task alongside various auxiliary tasks to improve generalization performance.
Motivated by the significant inter-task correlation, we propose a novel weakly supervised multi-task framework, termed AuxSegNet, which leverages saliency detection and multi-label image classification as auxiliary tasks to improve the primary task of semantic segmentation using only image-level ground-truth labels.
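The shared-backbone-with-auxiliary-heads layout such frameworks use can be sketched as follows. This is a minimal illustrative forward pass in plain Python, not AuxSegNet itself: the layer sizes, weight matrices, and head names are assumptions.

```python
def matvec(W, x):
    # Plain-Python matrix-vector product standing in for a network layer.
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def forward(x, W_shared, W_seg, W_sal, W_cls):
    # One shared feature extractor feeds the primary head and both auxiliary heads.
    shared = [max(0.0, v) for v in matvec(W_shared, x)]   # ReLU backbone features
    return {
        "segmentation": matvec(W_seg, shared),      # primary task logits
        "saliency": matvec(W_sal, shared),          # auxiliary task 1
        "classification": matvec(W_cls, shared),    # auxiliary task 2
    }

W_shared = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]     # 2 inputs -> 3 shared features
W_seg = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]]          # 4 "pixel" logits
W_sal = [[1.0, 1.0, 1.0]]                           # 1 saliency score
W_cls = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]          # 2 class logits

out = forward([1.0, -1.0], W_shared, W_seg, W_sal, W_cls)
```

Because all three heads backpropagate into the same shared features, supervision from the auxiliary tasks shapes the representation used by the primary segmentation head.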