690 papers with code • 3 benchmarks • 42 datasets
Multi-task learning aims to learn multiple different tasks simultaneously while maximizing performance on one or all of the tasks.
(Image credit: Cross-stitch Networks for Multi-task Learning)
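The shared-representation idea behind multi-task learning can be sketched as hard parameter sharing: one trunk feeds several task-specific heads, and training minimizes a weighted sum of per-task losses. The sketch below is a minimal, illustrative forward pass only; all layer sizes, weights, and loss weights are assumptions, not taken from any specific paper on this page.

```python
import numpy as np

# Minimal hard-parameter-sharing sketch: a shared trunk feeds two
# task-specific heads (one regression, one classification), and the
# joint objective is a fixed weighted sum of the two losses.
# Every size and weight here is an illustrative placeholder.

rng = np.random.default_rng(0)

x = rng.normal(size=(8, 16))          # batch of 8 inputs, 16 features

# Shared trunk: one linear layer + ReLU, used by both tasks.
W_shared = rng.normal(size=(16, 32))
h = np.maximum(x @ W_shared, 0.0)

# Task-specific heads on top of the shared representation.
W_reg = rng.normal(size=(32, 1))      # regression head
W_cls = rng.normal(size=(32, 3))      # 3-way classification head

y_reg = rng.normal(size=(8, 1))       # regression targets
y_cls = rng.integers(0, 3, size=8)    # class labels

# Regression loss: mean squared error.
pred_reg = h @ W_reg
loss_reg = np.mean((pred_reg - y_reg) ** 2)

# Classification loss: softmax cross-entropy.
logits = h @ W_cls
logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
loss_cls = -np.mean(np.log(probs[np.arange(8), y_cls]))

# Joint objective: weighted sum of the per-task losses.
total_loss = 1.0 * loss_reg + 0.5 * loss_cls
```

In practice both heads backpropagate into the shared trunk, which is what allows the joint optimization the snippets below refer to.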
Formulating MOT as multi-task learning of object detection and re-ID in a single network is appealing, since it allows joint optimization of the two tasks and enjoys high computational efficiency.
Using this dataset, we develop diagnosis methods based on multi-task learning and self-supervised learning that achieve an F1 of 0.90, an AUC of 0.98, and an accuracy of 0.89.
Numerous deep learning applications benefit from multi-task learning with multiple regression and classification objectives.
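A common way to combine multiple regression and classification objectives is to weight each task's loss by a learnable uncertainty term instead of a hand-tuned constant: each task i gets a log-variance s_i, and the objective becomes sum_i exp(-s_i) * L_i + s_i. The numbers below are illustrative placeholders for one evaluation of that formula, not results from any paper.

```python
import numpy as np

# Uncertainty-style loss weighting, sketched for two tasks:
# total = sum_i exp(-s_i) * L_i + s_i.
# Tasks with large learned log-variance s_i are down-weighted,
# while the +s_i term stops s_i from growing without bound.
# task_losses and log_vars are made-up example values.

task_losses = np.array([2.0, 0.5])   # e.g. a regression and a classification loss
log_vars = np.array([0.0, -1.0])     # learnable s_i, arbitrarily initialized

weighted = np.exp(-log_vars) * task_losses + log_vars
total = weighted.sum()
```

In a real model the `log_vars` would be trainable parameters updated by the same optimizer as the network weights.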
Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.
In this paper, we propose a unified network to encode implicit knowledge and explicit knowledge together, just as the human brain can learn knowledge through both conscious and subconscious learning.
Recently, there has been an increasing interest in end-to-end speech recognition that directly transcribes speech to text without any predefined alignments.