Multi-task learning aims to learn multiple tasks simultaneously while maximizing performance on one or all of them.
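As a concrete illustration, here is a minimal hard-parameter-sharing setup in PyTorch: a shared encoder feeds two task-specific heads and the per-task losses are summed. All names and dimensions are illustrative assumptions, not taken from any particular paper below.

```python
# Minimal sketch of hard-parameter-sharing multi-task learning:
# a shared encoder feeds two task-specific heads; losses are summed.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=32, hidden=64, n_classes_a=5, n_classes_b=3):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head_a = nn.Linear(hidden, n_classes_a)  # task A head
        self.head_b = nn.Linear(hidden, n_classes_b)  # task B head

    def forward(self, x):
        h = self.shared(x)
        return self.head_a(h), self.head_b(h)

model = MultiTaskNet()
x = torch.randn(8, 32)
ya = torch.randint(0, 5, (8,))
yb = torch.randint(0, 3, (8,))
logits_a, logits_b = model(x)
# Joint objective: optimizing both tasks updates the shared encoder.
loss = nn.functional.cross_entropy(logits_a, ya) + \
       nn.functional.cross_entropy(logits_b, yb)
loss.backward()
```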
We propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data.
SOTA for CCG Supertagging on CCGBank
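A much-simplified sketch of the cross-view idea, assuming a single auxiliary module that sees only the forward LSTM states (the actual CVT uses several auxiliary prediction modules and task-specific details): on unlabeled data, the restricted-view module is trained to match the full-view primary prediction.

```python
# Sketch of cross-view training on unlabeled data: an auxiliary module
# with a restricted view learns to agree with the full-view prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVTTagger(nn.Module):
    def __init__(self, vocab=1000, emb=32, hidden=32, n_tags=10):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.bilstm = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
        self.primary = nn.Linear(2 * hidden, n_tags)  # sees both directions
        self.fwd_only = nn.Linear(hidden, n_tags)     # restricted view

    def forward(self, tokens):
        h, _ = self.bilstm(self.embed(tokens))        # (B, T, 2*hidden)
        fwd = h[..., : h.size(-1) // 2]               # forward states only
        return self.primary(h), self.fwd_only(fwd)

model = CVTTagger()
unlabeled = torch.randint(0, 1000, (4, 12))
primary_logits, aux_logits = model(unlabeled)
# The primary prediction is a fixed target (no gradient through it);
# the auxiliary module must agree despite its restricted view.
target = F.softmax(primary_logits, dim=-1).detach()
cvt_loss = F.kl_div(F.log_softmax(aux_logits, dim=-1), target,
                    reduction="batchmean")
cvt_loss.backward()
```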
In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model.
The model is trained in a hierarchical fashion to introduce an inductive bias: low-level tasks are supervised at the bottom layers of the model and more complex tasks at the top layers.
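A hedged sketch of this hierarchical supervision, with POS tagging standing in for the low-level task and NER for the more complex one (both task choices and layer placements are illustrative assumptions):

```python
# Hierarchical multi-task supervision: the low-level head reads an early
# layer; the higher-level head reads a later layer built on top of it.
import torch
import torch.nn as nn

class HierarchicalMTL(nn.Module):
    def __init__(self, vocab=1000, emb=32, hidden=32, n_pos=17, n_ner=9):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.layer1 = nn.LSTM(emb, hidden, batch_first=True)     # bottom
        self.layer2 = nn.LSTM(hidden, hidden, batch_first=True)  # top
        self.pos_head = nn.Linear(hidden, n_pos)  # low-level task, layer 1
        self.ner_head = nn.Linear(hidden, n_ner)  # complex task, layer 2

    def forward(self, tokens):
        h1, _ = self.layer1(self.embed(tokens))
        h2, _ = self.layer2(h1)
        return self.pos_head(h1), self.ner_head(h2)

model = HierarchicalMTL()
tokens = torch.randint(0, 1000, (4, 10))
pos_logits, ner_logits = model(tokens)
print(pos_logits.shape, ner_logits.shape)  # (4, 10, 17), (4, 10, 9)
```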
Multi-task learning (MTL) has led to successes in many applications of machine learning, from natural language processing and speech recognition to computer vision and drug discovery.
Numerous deep learning applications benefit from multi-task learning with multiple regression and classification objectives.
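One common way to balance regression and classification objectives is to weight each loss with a learnable log-variance term. The sketch below shows that scheme in isolation; the weighting rule is an assumption for illustration, not necessarily the one used in any paper listed here.

```python
# Sketch of balancing two task losses with learnable log-variance weights.
import torch
import torch.nn as nn

log_vars = nn.Parameter(torch.zeros(2))  # one log-variance per task

def combined_loss(reg_loss, cls_loss):
    losses = torch.stack([reg_loss, cls_loss])
    # exp(-s) down-weights noisy tasks; +s penalizes inflating the variance.
    return (torch.exp(-log_vars) * losses + log_vars).sum()

reg_loss = torch.tensor(0.8, requires_grad=True)
cls_loss = torch.tensor(1.2, requires_grad=True)
total = combined_loss(reg_loss, cls_loss)
total.backward()  # gradients flow to both task losses and to log_vars
print(total.item())
```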
We present an algorithm for simultaneous face detection, landmark localization, pose estimation, and gender recognition using deep convolutional neural networks (CNNs).
#4 best model for Face Detection on Annotated Faces in the Wild
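A hedged sketch of the single-backbone, multi-head pattern this describes; the tiny backbone and head dimensions are illustrative assumptions, not the paper's architecture.

```python
# One shared CNN feature extractor feeds four face-analysis task heads.
import torch
import torch.nn as nn

class FaceMultiTaskCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.detect = nn.Linear(32, 1)      # face / no-face score
        self.landmarks = nn.Linear(32, 10)  # 5 (x, y) landmark points
        self.pose = nn.Linear(32, 3)        # roll, pitch, yaw
        self.gender = nn.Linear(32, 2)      # gender logits

    def forward(self, x):
        f = self.backbone(x)
        return self.detect(f), self.landmarks(f), self.pose(f), self.gender(f)

model = FaceMultiTaskCNN()
imgs = torch.randn(2, 3, 64, 64)
det, lms, pose, gender = model(imgs)
print(det.shape, lms.shape, pose.shape, gender.shape)
```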
In particular, Decoupled Classification Refinement (DCR) places a separate classification network in parallel with the localization network (the base detector).
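A hedged sketch of that decoupling: a separate classifier re-scores each detected region in parallel with the base detector, and the final confidence combines both scores. The module names and the multiplicative rescoring rule here are illustrative assumptions.

```python
# Parallel refinement: a separate classifier re-scores the base
# detector's regions, suppressing hard false positives.
import torch
import torch.nn as nn

base_score = nn.Linear(32, 1)      # stands in for the base detector's head
refine_cls = nn.Sequential(        # parallel classification network
    nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 1))

region_feats = torch.randn(5, 32)  # features for 5 detected regions
p_base = torch.sigmoid(base_score(region_feats))
p_refine = torch.sigmoid(refine_cls(region_feats))
# A region keeps a high score only if both networks agree.
final_score = p_base * p_refine
print(final_score.squeeze(-1))
```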
Unlike previous models, which require significant pre-processing to prepare linguistic features, LISA can incorporate syntax using merely raw tokens as input, encoding the sequence only once to simultaneously perform parsing, predicate detection, and role labeling for all predicates.
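A hedged sketch of the encode-once, multi-head pattern (the real LISA injects syntax into one of its self-attention heads; this sketch only shows a shared Transformer encoder with per-task heads over the same states):

```python
# Encode the token sequence once; read off several task heads from
# the same shared self-attention states.
import torch
import torch.nn as nn

class SharedEncoderTagger(nn.Module):
    def __init__(self, vocab=1000, d_model=64, n_deps=40, n_roles=20):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.dep_head = nn.Linear(d_model, n_deps)   # parsing labels
        self.pred_head = nn.Linear(d_model, 2)       # predicate yes/no
        self.role_head = nn.Linear(d_model, n_roles) # semantic roles

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))  # encode the sequence once
        return self.dep_head(h), self.pred_head(h), self.role_head(h)

model = SharedEncoderTagger()
tokens = torch.randint(0, 1000, (2, 9))
deps, preds, roles = model(tokens)
print(deps.shape, preds.shape, roles.shape)
```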