Auto-Lambda: Disentangling Dynamic Task Relationships

7 Feb 2022 · Shikun Liu, Stephen James, Andrew J. Davison, Edward Johns

Understanding the structure of multiple related tasks allows multi-task learning to improve the generalisation ability of one or all of them. However, this usually requires training every pairwise combination of tasks together in order to capture task relationships, at an extremely high computational cost. In this work, we learn task relationships via an automated weighting framework named Auto-Lambda. Unlike previous methods, where task relationships are assumed to be fixed, Auto-Lambda is a gradient-based meta-learning framework that explores continuous, dynamic task relationships via task-specific weightings, and can optimise any chosen combination of tasks through the formulation of a meta-loss, in which the validation loss automatically influences the task weightings throughout training. We apply the proposed framework to both multi-task and auxiliary learning problems in computer vision and robotics, and show that Auto-Lambda achieves state-of-the-art performance, even when compared to optimisation strategies designed specifically for each problem and data domain. Finally, we observe that Auto-Lambda can discover interesting learning behaviours, leading to new insights into multi-task learning. Code is available at https://github.com/lorenmt/auto-lambda.
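The core mechanism described in the abstract, learning task-specific weightings by differentiating a validation (meta-) loss through a one-step look-ahead of the model parameters, can be sketched in a few lines of PyTorch. The toy two-task model, per-task losses, random data and learning rates below are placeholder assumptions for illustration only, not the authors' implementation; see the linked repository for the real code.

```python
# A minimal first-order sketch of an Auto-Lambda-style bi-level update on a toy
# two-task regression problem (requires PyTorch >= 2.0 for torch.func).
import torch
from torch.func import functional_call

torch.manual_seed(0)
model = torch.nn.Linear(8, 2)                   # shared backbone, one output per task
lambdas = torch.ones(2, requires_grad=True)     # task-specific weightings (meta-parameters)
lr_model, lr_lambda = 1e-2, 1e-2

def task_losses(params, x, y):
    # Per-task losses evaluated under an arbitrary parameter dictionary.
    pred = functional_call(model, params, (x,))
    return [torch.nn.functional.mse_loss(pred[:, i], y[:, i]) for i in range(2)]

x_tr, y_tr = torch.randn(16, 8), torch.randn(16, 2)     # training batch
x_val, y_val = torch.randn(16, 8), torch.randn(16, 2)   # validation batch
params = dict(model.named_parameters())

# 1) Weighted training loss under the current task weightings.
train_loss = sum(w * l for w, l in zip(lambdas, task_losses(params, x_tr, y_tr)))

# 2) One-step look-ahead of the model parameters, kept in the autograd graph
#    so that gradients can flow back to the weightings.
grads = torch.autograd.grad(train_loss, list(params.values()), create_graph=True)
lookahead = {k: p - lr_model * g for (k, p), g in zip(params.items(), grads)}

# 3) Meta-loss: the validation loss of the primary task under the look-ahead
#    parameters drives the update of the weightings.
val_loss = task_losses(lookahead, x_val, y_val)[0]      # task 0 taken as primary
lambda_grad, = torch.autograd.grad(val_loss, lambdas)

# 4) Update the weightings, then take the ordinary weighted training step.
with torch.no_grad():
    lambdas -= lr_lambda * lambda_grad
    for p, g in zip(params.values(), grads):
        p -= lr_model * g
```

This sketch only illustrates how the validation loss can steer the task weightings during training; the paper's full formulation covers both multi-task and auxiliary learning settings and any chosen set of primary tasks.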

Results from the Paper


Ranked #3 for Robot Manipulation on RLBench (metric: Succ. Rate, 10 tasks, 100 demos/task)

Task                 Dataset   Model     Metric Name                              Metric Value   Global Rank
Robot Manipulation   RLBench   Auto-λ    Succ. Rate (10 tasks, 100 demos/task)    69.3           #3

Methods


No methods listed for this paper.