Multi-Task Learning

1063 papers with code • 6 benchmarks • 54 datasets

Multi-task learning aims to learn multiple different tasks simultaneously while maximizing performance on one or all of them.

(Image credit: Cross-stitch Networks for Multi-task Learning)
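As a minimal illustration of the shared-representation idea behind most of the approaches listed below, here is a hard-parameter-sharing sketch in NumPy: one shared trunk feeds two task-specific heads, so both tasks are trained from the same features. All names and layer sizes (`W_shared`, `W_task_a`, `forward`, etc.) are illustrative assumptions, not taken from any particular paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hard parameter sharing: one shared trunk feeds two task-specific heads.
W_shared = rng.normal(size=(4, 8))   # shared representation layer
W_task_a = rng.normal(size=(8, 3))   # head for a 3-class classification task
W_task_b = rng.normal(size=(8, 1))   # head for a scalar regression task

def forward(x):
    """Compute both task outputs from one shared representation."""
    h = np.tanh(x @ W_shared)        # shared features, reused by every task
    logits_a = h @ W_task_a          # task A: class logits
    pred_b = h @ W_task_b            # task B: regression prediction
    return logits_a, pred_b

x = rng.normal(size=(2, 4))          # a batch of 2 inputs
logits_a, pred_b = forward(x)
```

Training would sum the two task losses and backpropagate through the shared trunk, which is exactly where the weighting and gradient-conflict questions addressed by several papers below arise.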

Most implemented papers

Language Models are Few-Shot Learners

openai/gpt-3 NeurIPS 2020

By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do.

FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking

ifzhang/FairMOT 4 Apr 2020

Formulating MOT as multi-task learning of object detection and re-ID in a single network is appealing since it allows joint optimization of the two tasks and enjoys high computation efficiency.

COVID-CT-Dataset: A CT Scan Dataset about COVID-19

UCSD-AI4H/COVID-CT 30 Mar 2020

Using this dataset, we develop diagnosis methods based on multi-task learning and self-supervised learning, that achieve an F1 of 0.90, an AUC of 0.98, and an accuracy of 0.89.

Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics

yaringal/multi-task-learning-example CVPR 2018

Numerous deep learning applications benefit from multi-task learning with multiple regression and classification objectives.
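A simplified sketch of this paper's loss-weighting idea, under the assumption of one learnable log-variance scalar per task: each task loss is scaled by `exp(-s_i)` and regularized by `s_i`, so tasks the model is more uncertain about are automatically down-weighted. (This omits the factor-of-two distinction the paper draws between regression and classification likelihoods.)

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses via learned homoscedastic uncertainty.

    Each task i contributes exp(-s_i) * L_i + s_i, where s_i = log(sigma_i^2)
    is a learnable scalar; the +s_i term stops the model from driving all
    uncertainties to infinity to zero out the losses.
    """
    total = 0.0
    for loss, s in zip(task_losses, log_vars):
        total += np.exp(-s) * loss + s
    return total

# Two tasks with equal raw losses but different learned uncertainties:
# task 1 contributes exp(0)*1 + 0 = 1.0, task 2 contributes 0.25*1 + log(4).
combined = uncertainty_weighted_loss([1.0, 1.0], [0.0, np.log(4.0)])
```

In practice the `log_vars` would be trainable parameters optimized jointly with the network weights.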

Language Models are Unsupervised Multitask Learners

openai/gpt-2 Preprint 2019

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.

Towards Real-Time Multi-Object Tracking

Zhongdao/Towards-Realtime-MOT ECCV 2020

In this paper, we propose an MOT system that allows target detection and appearance embedding to be learned in a shared model.

Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts

shenweichen/DeepCTR 19 Jul 2018

In this work, we propose a novel multi-task learning approach, Multi-gate Mixture-of-Experts (MMoE), which explicitly learns to model task relationships from data.
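A toy NumPy sketch of the MMoE structure described above, with assumed sizes and weight names: all tasks share one pool of experts, and each task has its own softmax gate that learns a task-specific mixture of expert outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

d, n_experts, n_tasks, h_dim = 4, 3, 2, 8
W_experts = rng.normal(size=(n_experts, d, h_dim))   # shared expert networks
W_gates = rng.normal(size=(n_tasks, d, n_experts))   # one gating network per task

def mmoe(x):
    """Return one mixed representation per task.

    Every task sees the same experts, but each task's softmax gate
    produces its own mixture weights, letting related tasks share
    experts and unrelated tasks diverge.
    """
    expert_out = np.stack([np.tanh(x @ W) for W in W_experts])  # (E, B, H)
    outputs = []
    for g in W_gates:
        w = softmax(x @ g)                                      # (B, E) weights
        outputs.append(np.einsum('be,ebh->bh', w, expert_out))  # (B, H) mixture
    return outputs

x = rng.normal(size=(5, d))
task_reprs = mmoe(x)   # one representation per task, fed to task-specific heads
```

Each returned representation would then feed a separate task head ("tower" in the paper's terminology).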

Measuring Massive Multitask Language Understanding

hendrycks/test 7 Sep 2020

By comprehensively evaluating the breadth and depth of a model's academic and professional understanding, our test can be used to analyze models across many tasks and to identify important shortcomings.

Gradient Surgery for Multi-Task Learning

tianheyu927/PCGrad NeurIPS 2020

While deep learning and deep reinforcement learning (RL) systems have demonstrated impressive results in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge.
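The gradient-surgery idea behind PCGrad can be sketched as follows. This is a simplified version, assumed here for illustration: each task gradient is projected onto the normal plane of any other task's original gradient it conflicts with, without the random task ordering used in the paper.

```python
import numpy as np

def pcgrad(grads):
    """Project away conflicting components before summing task gradients.

    If g_i . g_j < 0 (tasks i and j conflict), g_i is projected onto the
    normal plane of g_j, removing the component that would hurt task j.
    """
    projected = [g.astype(float).copy() for g in grads]
    for g_i in projected:
        for g_j in grads:
            dot = g_i @ g_j
            if dot < 0:                        # conflicting gradients
                g_i -= dot / (g_j @ g_j) * g_j
    return sum(projected)

# Task 1 pulls along +x, task 2 pulls along (-x, +y); the conflicting
# x-components are partially removed before the update is formed.
update = pcgrad([np.array([1.0, 0.0]), np.array([-1.0, 1.0])])
```

With non-conflicting gradients (non-negative dot products) the function reduces to a plain sum, matching the intent that surgery only intervenes when tasks disagree.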