Multi-Task Learning
1098 papers with code • 6 benchmarks • 55 datasets
Multi-task learning aims to learn multiple tasks simultaneously while maximizing performance on one or all of them.
(Image credit: Cross-stitch Networks for Multi-task Learning)
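A common way to realize this is hard parameter sharing: one trunk is shared by all tasks and each task gets its own head. Below is a minimal sketch of that pattern; the layer sizes, task count, and equal loss weighting are illustrative assumptions, not any particular paper's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskNet(nn.Module):
    """Hard parameter sharing: one shared trunk, one head per task."""

    def __init__(self, in_dim=128, hidden=256, head_classes=(10, 5)):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList(nn.Linear(hidden, c) for c in head_classes)

    def forward(self, x):
        shared = self.trunk(x)                       # shared representation
        return [head(shared) for head in self.heads]

model = MultiTaskNet()
x = torch.randn(32, 128)
labels = [torch.randint(0, 10, (32,)), torch.randint(0, 5, (32,))]
logits = model(x)
# Sum the per-task losses; in practice the weighting is often tuned or learned.
loss = sum(F.cross_entropy(l, y) for l, y in zip(logits, labels))
loss.backward()
```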
Libraries
Use these libraries to find Multi-Task Learning models and implementations.

Most implemented papers
You Only Learn One Representation: Unified Network for Multiple Tasks
In this paper, we propose a unified network that encodes implicit and explicit knowledge together, just as the human brain can learn from normal learning as well as from subconscious learning.
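A hedged sketch of the "implicit knowledge" idea in the snippet above: a learned, input-independent tensor is combined with the explicit (input-dependent) features, here by simple addition. YOLOR's actual operators and placement differ; this only illustrates the concept.

```python
import torch
import torch.nn as nn

class ImplicitAdd(nn.Module):
    """Adds a learned, input-independent ('implicit') representation to features."""

    def __init__(self, channels):
        super().__init__()
        self.implicit = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        # x: explicit features of shape (N, C, H, W)
        return x + self.implicit
```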
Joint CTC-Attention based End-to-End Speech Recognition using Multi-task Learning
Recently, there has been an increasing interest in end-to-end speech recognition that directly transcribes speech to text without any predefined alignments.
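This paper is known for training a shared encoder with both a CTC loss and an attention-decoder loss, weighted as L = lambda * L_ctc + (1 - lambda) * L_att. The sketch below follows that formulation; the shapes, padding conventions, and lambda value are assumptions for illustration.

```python
import torch
import torch.nn as nn

ctc_loss_fn = nn.CTCLoss(blank=0, zero_infinity=True)
att_loss_fn = nn.CrossEntropyLoss(ignore_index=-1)

def joint_loss(ctc_log_probs, input_lengths, att_logits,
               targets, target_lengths, lam=0.3):
    # ctc_log_probs: (T, N, vocab) log-softmax outputs of the CTC branch.
    # att_logits:    (N, L, vocab) outputs of the attention decoder.
    # targets:       (N, L) label ids, padded with -1.
    # CTCLoss takes the unpadded targets concatenated into one 1-D tensor.
    flat = torch.cat([t[:n] for t, n in zip(targets, target_lengths)])
    l_ctc = ctc_loss_fn(ctc_log_probs, flat, input_lengths, target_lengths)
    l_att = att_loss_fn(att_logits.transpose(1, 2), targets)
    return lam * l_ctc + (1.0 - lam) * l_att
```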
Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning
If the aim of meta-learning and multi-task methods is to enable faster acquisition of entirely new behaviors, we must evaluate them on task distributions that are sufficiently broad to enable generalization to new behaviors.
PAMTRI: Pose-Aware Multi-Task Learning for Vehicle Re-Identification Using Highly Randomized Synthetic Data
In comparison with person re-identification (ReID), which has been widely studied in the research community, vehicle ReID has received less attention.
Two-Stream Convolutional Networks for Action Recognition in Videos
Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art.
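A toy sketch of the two-stream idea: a spatial network sees an RGB frame, a temporal network sees a stack of optical-flow fields (2 channels x 10 frames = 20, matching the paper's L = 10), and class scores are fused by averaging softmax outputs. The tiny backbones are placeholders, not the paper's actual ConvNets.

```python
import torch
import torch.nn as nn

def small_backbone(in_ch, n_classes):
    # Placeholder convolutional classifier standing in for each stream's ConvNet.
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, kernel_size=3, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, n_classes),
    )

class TwoStream(nn.Module):
    def __init__(self, n_classes=101):
        super().__init__()
        self.spatial = small_backbone(3, n_classes)    # RGB stream
        self.temporal = small_backbone(20, n_classes)  # optical-flow stream

    def forward(self, rgb, flow):
        s = self.spatial(rgb).softmax(dim=-1)
        t = self.temporal(flow).softmax(dim=-1)
        return (s + t) / 2  # late fusion by averaging class scores
```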
Revisiting RCNN: On Awakening the Classification Power of Faster RCNN
Recent region-based object detectors are usually built with separate classification and localization branches on top of shared feature extraction networks.
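A minimal sketch of the structure the snippet describes: classification and localization branches sitting on top of the same shared per-RoI features. The feature and class dimensions are illustrative, not Faster R-CNN's exact values.

```python
import torch
import torch.nn as nn

class TwoBranchHead(nn.Module):
    """Separate classification and localization branches over shared features."""

    def __init__(self, feat_dim=1024, n_classes=81):
        super().__init__()
        self.cls_branch = nn.Linear(feat_dim, n_classes)      # class scores
        self.loc_branch = nn.Linear(feat_dim, 4 * n_classes)  # per-class box deltas

    def forward(self, roi_feats):
        # roi_feats: (num_rois, feat_dim) from the shared feature extractor
        return self.cls_branch(roi_feats), self.loc_branch(roi_feats)
```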
LEAF: A Benchmark for Federated Settings
Modern federated networks, such as those composed of wearable devices, mobile phones, or autonomous vehicles, generate massive amounts of data each day.
Multi-Task Learning as Multi-Objective Optimization
Existing gradient-based multi-objective optimization algorithms are not directly applicable to large-scale learning problems, since they scale poorly with the dimensionality of the gradients and the number of tasks.
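For two tasks, the paper's min-norm subproblem has a closed-form solution: find alpha in [0, 1] minimizing ||alpha * g1 + (1 - alpha) * g2||^2, then update the shared parameters with the combined gradient. A sketch of that special case, with g1 and g2 as the flattened shared-parameter gradients of the two tasks:

```python
import torch

def min_norm_alpha(g1: torch.Tensor, g2: torch.Tensor) -> float:
    """Closed-form minimizer of ||a*g1 + (1-a)*g2||^2 over a in [0, 1]."""
    diff = g1 - g2
    # Analytic minimizer of the quadratic in alpha, clipped to [0, 1].
    alpha = torch.dot(g2 - g1, g2) / (torch.dot(diff, diff) + 1e-12)
    return float(alpha.clamp(0.0, 1.0))

g1, g2 = torch.randn(1000), torch.randn(1000)
a = min_norm_alpha(g1, g2)
combined = a * g1 + (1 - a) * g2  # a common descent direction for both tasks
```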
Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data
This is the first work to experiment with both copying words from the source context and fully pre-training a sequence-to-sequence model on the GEC task.
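A hedged sketch of the copy mechanism the snippet refers to, in the familiar pointer-generator style: the output distribution mixes generating from the vocabulary with copying source tokens via attention weights. Names and shapes are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn.functional as F

def copy_augmented_dist(vocab_logits, attn_weights, src_token_ids, p_copy):
    # vocab_logits:  (N, vocab) decoder generation scores.
    # attn_weights:  (N, src_len) attention over source positions (rows sum to 1).
    # src_token_ids: (N, src_len) vocabulary ids of the source tokens.
    # p_copy:        (N, 1) learned balance between copying and generating.
    gen_dist = F.softmax(vocab_logits, dim=-1)
    copy_dist = torch.zeros_like(gen_dist)
    # Scatter attention mass onto the vocabulary ids of the source tokens.
    copy_dist.scatter_add_(1, src_token_ids, attn_weights)
    return (1.0 - p_copy) * gen_dist + p_copy * copy_dist
```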
A Multi-task Learning Model for Chinese-oriented Aspect Polarity Classification and Aspect Term Extraction
The aspect-based sentiment analysis (ABSA) task is a multi-grained natural language processing task consisting of two subtasks: aspect term extraction (ATE) and aspect polarity classification (APC).
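An illustrative sketch of the joint setup the snippet describes: a shared encoder with a token-level head for ATE and a sequence-level head for APC. The BiLSTM encoder, mean pooling, and label sets are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class JointABSA(nn.Module):
    """Shared encoder with per-token ATE tagging and per-sentence APC heads."""

    def __init__(self, vocab=30000, dim=256, n_tags=3, n_polarities=3):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.ate_head = nn.Linear(2 * dim, n_tags)        # per-token BIO tags
        self.apc_head = nn.Linear(2 * dim, n_polarities)  # per-sentence polarity

    def forward(self, token_ids):
        h, _ = self.encoder(self.emb(token_ids))          # (N, L, 2 * dim)
        return self.ate_head(h), self.apc_head(h.mean(dim=1))
```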