Multi-Task Learning
952 papers with code • 6 benchmarks • 51 datasets
Multi-task learning aims to learn multiple tasks simultaneously while maximizing performance on one or all of the tasks.
(Image credit: Cross-stitch Networks for Multi-task Learning)
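A common baseline for this setting is hard parameter sharing: a shared trunk feeds several task-specific heads, and the per-task losses are summed. The sketch below is purely illustrative; the layer sizes, task types, and names are assumptions, not taken from any paper listed here.

```python
# Minimal hard-parameter-sharing sketch (illustrative; sizes and names are assumptions).
import torch
import torch.nn as nn

class SharedTrunkMTL(nn.Module):
    def __init__(self, in_dim=128, hidden=64, n_classes=10):
        super().__init__()
        # Shared representation used by every task.
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # Task-specific heads: one classification task, one regression task.
        self.cls_head = nn.Linear(hidden, n_classes)
        self.reg_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.trunk(x)
        return self.cls_head(h), self.reg_head(h)

model = SharedTrunkMTL()
x = torch.randn(32, 128)
cls_target = torch.randint(0, 10, (32,))
reg_target = torch.randn(32, 1)
cls_out, reg_out = model(x)
# Joint objective: unweighted sum of the per-task losses.
loss = nn.functional.cross_entropy(cls_out, cls_target) + \
       nn.functional.mse_loss(reg_out, reg_target)
loss.backward()
```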
Libraries
Use these libraries to find Multi-Task Learning models and implementations.

Most implemented papers
RetinaFace: Single-stage Dense Face Localisation in the Wild
Face Analysis Project on MXNet
Language Models are Few-Shot Learners
By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do.
FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking
Formulating MOT as multi-task learning of object detection and re-ID in a single network is appealing since it allows joint optimization of the two tasks and enjoys high computational efficiency.
COVID-CT-Dataset: A CT Scan Dataset about COVID-19
Using this dataset, we develop diagnosis methods based on multi-task learning and self-supervised learning that achieve an F1 of 0.90, an AUC of 0.98, and an accuracy of 0.89.
Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics
Numerous deep learning applications benefit from multi-task learning with multiple regression and classification objectives.
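The title refers to weighting each task's loss by a learned homoscedastic uncertainty. A minimal sketch of that weighting idea, assuming one learnable log-variance per task (the two-task setup and all names below are illustrative assumptions, not the authors' code), could look like this:

```python
# Hedged sketch of uncertainty-based loss weighting; names and setup are assumptions.
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    def __init__(self, n_tasks=2):
        super().__init__()
        # One learnable log-variance per task, initialised to 0 (sigma = 1).
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))

    def forward(self, task_losses):
        total = 0.0
        for i, loss in enumerate(task_losses):
            precision = torch.exp(-self.log_vars[i])
            # Down-weight high-uncertainty tasks; the log-variance term keeps
            # the model from driving every weight to zero.
            total = total + precision * loss + self.log_vars[i]
        return total

weighting = UncertaintyWeightedLoss(n_tasks=2)
seg_loss = torch.tensor(0.7, requires_grad=True)
depth_loss = torch.tensor(1.3, requires_grad=True)
combined = weighting([seg_loss, depth_loss])
combined.backward()
```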
Language Models are Unsupervised Multitask Learners
Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.
Towards Real-Time Multi-Object Tracking
In this paper, we propose an MOT system that allows target detection and appearance embedding to be learned in a shared model.
Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts
In this work, we propose a novel multi-task learning approach, Multi-gate Mixture-of-Experts (MMoE), which explicitly learns to model task relationships from data.
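As a rough illustration of the idea described above (shared experts mixed by a separate softmax gate for each task), here is a minimal sketch; the expert count, layer sizes, and names are assumptions rather than the authors' reference implementation.

```python
# Rough MMoE-style sketch: shared experts, one softmax gate per task.
import torch
import torch.nn as nn

class MMoE(nn.Module):
    def __init__(self, in_dim=64, expert_dim=32, n_experts=4, n_tasks=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, expert_dim), nn.ReLU())
             for _ in range(n_experts)])
        # One gating network per task decides how to mix the shared experts.
        self.gates = nn.ModuleList(
            [nn.Linear(in_dim, n_experts) for _ in range(n_tasks)])
        self.towers = nn.ModuleList(
            [nn.Linear(expert_dim, 1) for _ in range(n_tasks)])

    def forward(self, x):
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, D)
        outputs = []
        for gate, tower in zip(self.gates, self.towers):
            weights = torch.softmax(gate(x), dim=-1).unsqueeze(-1)     # (B, E, 1)
            mixed = (weights * expert_out).sum(dim=1)                  # (B, D)
            outputs.append(tower(mixed))
        return outputs

model = MMoE()
y_task1, y_task2 = model(torch.randn(8, 64))
```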
Gradient Surgery for Multi-Task Learning
While deep learning and deep reinforcement learning (RL) systems have demonstrated impressive results in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge.
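Gradient surgery is commonly associated with projecting away the conflicting component of one task's gradient against another's (PCGrad). The sketch below illustrates only that projection step, under the simplifying assumption that per-task gradients are already flattened into vectors; it is not the authors' implementation.

```python
# Sketch of the "project out conflicting gradient components" idea (assumptions noted above).
import torch

def project_conflicting(grads):
    """grads: list of flattened per-task gradient vectors."""
    projected = [g.clone() for g in grads]
    for i, g_i in enumerate(projected):
        for j, g_j in enumerate(grads):
            if i == j:
                continue
            dot = torch.dot(g_i, g_j)
            if dot < 0:
                # Remove the component of g_i that points against g_j.
                g_i -= dot / (g_j.norm() ** 2 + 1e-12) * g_j
    # Shared parameters are then updated with the averaged surgered gradients.
    return torch.stack(projected).mean(dim=0)

g1 = torch.tensor([1.0, 0.0])
g2 = torch.tensor([-0.5, 1.0])
update = project_conflicting([g1, g2])
```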
You Only Learn One Representation: Unified Network for Multiple Tasks
In this paper, we propose a unified network to encode implicit knowledge and explicit knowledge together, just as the human brain can learn knowledge from normal learning as well as subconscious learning.