Multi-task learning aims to learn multiple tasks simultaneously while maximizing performance on one or all of them.
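A common way to set this up is a shared encoder with one lightweight head per task, trained on a (weighted) sum of per-task losses. Below is a minimal, hypothetical PyTorch sketch of that pattern; the task names, dimensions, and equal loss weighting are assumptions for illustration, not taken from any specific paper listed here.

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Shared encoder with one task-specific head per task (illustrative)."""
    def __init__(self, input_dim, hidden_dim, task_output_dims):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        # One lightweight head per task; all tasks share the encoder.
        self.heads = nn.ModuleDict({
            name: nn.Linear(hidden_dim, out_dim)
            for name, out_dim in task_output_dims.items()
        })

    def forward(self, x, task):
        shared = self.encoder(x)         # representation shared across tasks
        return self.heads[task](shared)  # task-specific prediction

# Joint training: sum the per-task losses (per-task weights could be added).
model = MultiTaskModel(input_dim=128, hidden_dim=256,
                       task_output_dims={"pos": 45, "ner": 9})
criterion = nn.CrossEntropyLoss()
x = torch.randn(4, 128)
loss = sum(criterion(model(x, task), torch.randint(0, n, (4,)))
           for task, n in [("pos", 45), ("ner", 9)])
loss.backward()
```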
We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data.
SOTA for CCG Supertagging on CCGBank
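At a high level, CVT adds auxiliary prediction modules that see restricted views of the input and, on unlabeled data, trains them to match the full-view primary module's soft predictions. A minimal sketch of that consistency term, assuming PyTorch and that the per-module logits are already computed (the function name and signature are hypothetical):

```python
import torch.nn.functional as F

def cvt_unlabeled_loss(primary_logits, auxiliary_logits_list):
    """Consistency term on unlabeled data: each auxiliary module (restricted
    view) is pushed toward the primary module's prediction (full view),
    which is treated as a fixed target via detach()."""
    target = F.softmax(primary_logits, dim=-1).detach()
    loss = 0.0
    for aux_logits in auxiliary_logits_list:
        loss = loss + F.kl_div(F.log_softmax(aux_logits, dim=-1),
                               target, reduction="batchmean")
    return loss
```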
Recently, pre-trained models have achieved state-of-the-art results in various language understanding tasks, which indicates that pre-training on large-scale corpora may play a crucial role in natural language processing.
#4 best model for Natural Language Inference on WNLI
In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model.
This paper explores the use of knowledge distillation to improve a Multi-Task Deep Neural Network (MT-DNN) (Liu et al., 2019) for learning text representations across multiple natural language understanding tasks.
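The core idea of distillation here is to train the student against the teacher's soft output distribution in addition to the gold labels. A minimal sketch of such a combined objective, assuming PyTorch; the temperature, mixing weight, and function name are illustrative assumptions rather than the exact MT-DNN recipe:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=1.0, alpha=0.5):
    """Mix a soft-target KL term against the teacher's distribution with the
    usual cross-entropy on gold labels (temperature T and weight alpha assumed)."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```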
The model is trained in a hierarchical fashion to introduce an inductive bias: low-level tasks are supervised at the bottom layers of the model and more complex tasks at the top layers.
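A minimal sketch of that layout, assuming PyTorch: a lower BiLSTM feeds a head for a low-level task (e.g. POS tagging) and also feeds an upper BiLSTM, whose states feed a head for a higher-level task (e.g. chunking). The task choices, layer sizes, and names are illustrative assumptions, not any paper's exact configuration.

```python
import torch.nn as nn

class HierarchicalTagger(nn.Module):
    """Hierarchical supervision: the low-level task is predicted from the
    lower layer, the higher-level task from the upper layer stacked on it."""
    def __init__(self, emb_dim=100, hidden=128, n_pos=45, n_chunk=23):
        super().__init__()
        self.lower = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.upper = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.pos_head = nn.Linear(2 * hidden, n_pos)      # supervised at the bottom
        self.chunk_head = nn.Linear(2 * hidden, n_chunk)  # supervised at the top

    def forward(self, embeddings):
        low, _ = self.lower(embeddings)   # (batch, seq, 2*hidden)
        high, _ = self.upper(low)
        return self.pos_head(low), self.chunk_head(high)
```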
Lane detection is an important yet challenging task in autonomous driving, affected by many factors, e.g., light conditions, occlusions caused by other vehicles, irrelevant markings on the road, and the inherently long and thin shape of lanes.
#2 best model for Lane Detection on CULane
We propose a spatio-temporal cache mechanism that enables learning the spatial dimension of the input in addition to the hidden states corresponding to the temporal input sequence.