DYNASHARE: DYNAMIC NEURAL NETWORKS FOR MULTI-TASK LEARNING

29 Sep 2021 · Golara Javadi, Frederick Tung, Gabriel L. Oliveira

Parameter sharing approaches for deep multi-task learning share a common intuition: for a single network to perform multiple prediction tasks, the network needs to support multiple specialized execution paths. However, previous parameter sharing approaches have relied on a static network structure for each task. In this paper, we propose to increase the capacity of a single network to support multiple tasks by radically expanding the space of possible specialized execution paths. DynaShare is a new approach to deep multi-task learning that learns a hierarchical gating policy from the training data: a task-specific policy for coarse layer selection combines with gating units for individual input instances to determine the execution path at inference time. Experimental results on standard multi-task learning benchmark datasets demonstrate the potential of the proposed approach.
