Multi-Task Learning with User Preferences: Gradient Descent with Controlled Ascent in Pareto Optimization

ICML 2020  ·  Debabrata Mahapatra, Vaibhav Rajan

Multi-Task Learning (MTL) is a well-established learning paradigm for jointly learning models for multiple correlated tasks. Often the tasks conflict, requiring trade-offs between them during optimization. Recent advances in multi-objective optimization based MTL have enabled the use of large-scale deep networks to find one or more Pareto optimal solutions. However, these methods cannot find exact Pareto optimal solutions satisfying user-specified preferences with respect to task-specific losses, which is not only a common requirement in applications but also a useful way to explore the infinite set of Pareto optimal solutions. We develop the first gradient-based multi-objective MTL algorithm to address this problem. Our unique approach combines multiple gradient descent with carefully controlled ascent, which enables it to trace the Pareto front in a principled manner and makes it robust to initialization. Assuming only differentiability of the task-specific loss functions, we provide theoretical guarantees for convergence. We empirically demonstrate the superiority of our algorithm over state-of-the-art methods.
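To illustrate the idea of preference-guided multi-objective optimization described in the abstract, here is a minimal toy sketch on two quadratic task losses. It is an assumption-laden simplification, not the paper's actual EPO Search algorithm: the balance criterion below (equalizing the preference-weighted losses r_j * l_j) and the helper names (`epo_like_step`, `losses`, `grads`) are hypothetical stand-ins, whereas the paper uses a KL-divergence-based non-uniformity measure and a more careful choice of search direction.

```python
import numpy as np

# Toy two-task problem: each task loss is a quadratic with a different minimum,
# so the tasks conflict and the Pareto front is the segment between the two minima.
def losses(x):
    l1 = 0.5 * np.sum((x - np.array([1.0, 0.0])) ** 2) + 0.1
    l2 = 0.5 * np.sum((x - np.array([0.0, 1.0])) ** 2) + 0.1
    return np.array([l1, l2])

def grads(x):
    g1 = x - np.array([1.0, 0.0])
    g2 = x - np.array([0.0, 1.0])
    return np.stack([g1, g2])

def epo_like_step(x, r, lr=0.1):
    """One hypothetical step: balance weighted losses first, then descend.

    A preference vector r encodes the desired trade-off; here we (simplistically)
    treat a point as satisfying the preference when all r_j * l_j are equal.
    """
    l = losses(x)
    g = grads(x)
    w = r * l
    if np.max(w) - np.min(w) > 1e-3:
        # "Balance" mode: descend only the task whose weighted loss is largest.
        # This may *increase* another task's loss -- a crude stand-in for the
        # paper's carefully controlled ascent toward the preference ray.
        d = g[np.argmax(w)]
    else:
        # "Descent" mode: an equal-weight gradient combination reduces all losses.
        d = g.mean(axis=0)
    return x - lr * d

# Uniform preference: seek a Pareto optimal point with l1 == l2.
x = np.array([2.0, 2.0])
r = np.array([1.0, 1.0])
for _ in range(200):
    x = epo_like_step(x, r)
l = losses(x)
```

With the uniform preference and a symmetric start, the iterates settle near (0.5, 0.5), the point on the Pareto front where the two losses coincide; a non-uniform r would instead favor the task with the larger preference weight.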


