Multi-Task Learning

690 papers with code • 3 benchmarks • 42 datasets

Multi-task learning aims to learn multiple tasks simultaneously, typically by sharing representations across them, while maximizing performance on one or all of the tasks.
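The idea can be sketched with a toy shared-trunk model (a minimal illustration; all names and data here are hypothetical, not taken from any paper on this page): two task heads read the same shared representation, and their per-task losses are summed into one training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one shared input, two tasks (regression and binary classification).
X = rng.normal(size=(64, 8))
y_reg = X @ rng.normal(size=8)            # task 1: regression target
y_cls = (X[:, 0] > 0).astype(float)       # task 2: binary label

# Shared trunk + one linear head per task.
W_shared = rng.normal(size=(8, 16)) * 0.1
w_reg = np.zeros(16)
w_cls = np.zeros(16)

def forward_and_loss():
    h = np.maximum(X @ W_shared, 0.0)     # shared representation (ReLU)
    pred_reg = h @ w_reg
    pred_cls = 1.0 / (1.0 + np.exp(-(h @ w_cls)))
    mse = np.mean((pred_reg - y_reg) ** 2)
    bce = -np.mean(y_cls * np.log(pred_cls + 1e-9)
                   + (1 - y_cls) * np.log(1 - pred_cls + 1e-9))
    return h, pred_reg, pred_cls, mse + bce   # equal-weight multi-task loss

history = []
lr = 0.05
for _ in range(300):
    h, pred_reg, pred_cls, total = forward_and_loss()
    history.append(total)
    # Gradient steps on the task heads only (trunk kept fixed for brevity).
    w_reg -= lr * h.T @ (2 * (pred_reg - y_reg)) / len(X)
    w_cls -= lr * h.T @ (pred_cls - y_cls) / len(X)
```

Both heads train against the same shared features, so the combined loss decreases for both tasks at once; real systems also update the trunk and often weight the per-task losses.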

(Image credit: Cross-stitch Networks for Multi-task Learning)

Most implemented papers

Language Models are Few-Shot Learners

openai/gpt-3 NeurIPS 2020

By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do.

FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking

ifzhang/FairMOT 4 Apr 2020

Formulating MOT as multi-task learning of object detection and re-ID in a single network is appealing since it allows joint optimization of the two tasks and enjoys high computational efficiency.

COVID-CT-Dataset: A CT Scan Dataset about COVID-19

UCSD-AI4H/COVID-CT 30 Mar 2020

Using this dataset, we develop diagnosis methods based on multi-task learning and self-supervised learning that achieve an F1 of 0.90, an AUC of 0.98, and an accuracy of 0.89.

Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics

yaringal/multi-task-learning-example CVPR 2018

Numerous deep learning applications benefit from multi-task learning with multiple regression and classification objectives.
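The paper's core idea, weighting each task loss by a learned (homoscedastic) uncertainty, can be sketched as follows. This is a minimal sketch using the common log-variance parametrization, not the authors' code: each task loss L_i is scaled by a learned precision exp(-s_i), and a +s_i term keeps the model from simply ignoring a task.

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses as sum_i exp(-s_i) * L_i + s_i,
    where s_i = log(sigma_i^2) is a learned log-variance per task."""
    task_losses = np.asarray(task_losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-log_vars) * task_losses + log_vars))

# Setting the derivative -exp(-s_i) * L_i + 1 to zero gives exp(-s_i) = 1 / L_i:
# noisier (higher-loss) tasks automatically receive smaller weights.
```

In practice the log-variances are trainable parameters optimized jointly with the network weights.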

Towards Real-Time Multi-Object Tracking

Zhongdao/Towards-Realtime-MOT ECCV 2020

In this paper, we propose an MOT system that allows target detection and appearance embedding to be learned in a shared model.

Language Models are Unsupervised Multitask Learners

PaddlePaddle/PaddleNLP Preprint 2019

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.

Gradient Surgery for Multi-Task Learning

tianheyu927/PCGrad NeurIPS 2020

While deep learning and deep reinforcement learning (RL) systems have demonstrated impressive results in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge.
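The paper's gradient-surgery procedure (PCGrad) can be sketched in a few lines. This is a simplified sketch, not the authors' implementation: when two task gradients conflict (negative dot product), each is projected onto the normal plane of the other before the gradients are summed.

```python
import numpy as np

def pcgrad(grads, rng=None):
    """Project each task gradient onto the normal plane of every conflicting
    task gradient, then sum the surgically altered gradients."""
    rng = rng or np.random.default_rng(0)
    grads = [np.asarray(g, dtype=float) for g in grads]
    projected = []
    for i, g in enumerate(grads):
        g = g.copy()
        others = np.array([j for j in range(len(grads)) if j != i])
        rng.shuffle(others)                      # random task order, as in PCGrad
        for j in others:
            dot = g @ grads[j]
            if dot < 0:                          # conflicting directions
                g -= dot / (grads[j] @ grads[j]) * grads[j]
        projected.append(g)
    return np.sum(projected, axis=0)
```

For example, with task gradients [1, 0] and [-1, 1] (which conflict), the projected gradients become [0.5, 0.5] and [0, 1], so neither task's update directly opposes the other.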

You Only Learn One Representation: Unified Network for Multiple Tasks

WongKinYiu/yolor 10 May 2021

In this paper, we propose a unified network to encode implicit knowledge and explicit knowledge together, just as the human brain can learn from normal learning as well as subconscious learning.

Joint CTC-Attention based End-to-End Speech Recognition using Multi-task Learning

PaddlePaddle/PaddleSpeech 21 Sep 2016

Recently, there has been an increasing interest in end-to-end speech recognition that directly transcribes speech to text without any predefined alignments.
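The multi-task objective used in joint CTC-attention training interpolates the two losses with a single mixing weight; a minimal sketch (the default lambda here is illustrative, not the paper's tuned value):

```python
def joint_ctc_attention_loss(ctc_loss, attention_loss, lam=0.3):
    """Multi-task objective lam * L_CTC + (1 - lam) * L_attention, lam in [0, 1]."""
    return lam * ctc_loss + (1.0 - lam) * attention_loss
```

The CTC branch enforces monotonic alignment while the attention branch handles flexible decoding; lambda trades off the two during training.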