
Multi-Task Learning

78 papers with code · Methodology
Subtask of Transfer Learning

Multi-task learning aims to learn multiple tasks simultaneously while maximizing performance on one or all of them.
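A common instantiation is hard parameter sharing: a shared trunk feeds several task-specific heads, and training minimizes a weighted sum of per-task losses. The sketch below shows this pattern in PyTorch; the module names, dimensions, placeholder labels, and equal loss weights are illustrative assumptions, not taken from any paper listed here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HardSharingMTL(nn.Module):
    """Shared trunk with one lightweight head per task (hard parameter sharing)."""
    def __init__(self, in_dim=128, hidden=256, n_classes_a=10, n_classes_b=5):
        super().__init__()
        # Gradients from every task update the shared trunk.
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Each head sees only the shared representation.
        self.head_a = nn.Linear(hidden, n_classes_a)
        self.head_b = nn.Linear(hidden, n_classes_b)

    def forward(self, x):
        z = self.trunk(x)
        return self.head_a(z), self.head_b(z)

model = HardSharingMTL()
x = torch.randn(32, 128)               # one batch shared by both tasks
y_a = torch.randint(0, 10, (32,))      # placeholder labels, task A
y_b = torch.randint(0, 5, (32,))       # placeholder labels, task B
logits_a, logits_b = model(x)

# The multi-task objective is a (here equally) weighted sum of per-task losses.
loss = F.cross_entropy(logits_a, y_a) + F.cross_entropy(logits_b, y_b)
loss.backward()
```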

Greatest papers with code

DRAGNN: A Transition-based Framework for Dynamically Connected Neural Networks

13 Mar 2017 · tensorflow/models

In this work, we present a compact, modular framework for constructing novel recurrent neural architectures. Our basic module is a new generic unit, the Transition-Based Recurrent Unit (TBRU).

DEPENDENCY PARSING · MULTI-TASK LEARNING

Semi-Supervised Sequence Modeling with Cross-View Training

EMNLP 2018 · tensorflow/models

We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data. On unlabeled examples, CVT teaches auxiliary prediction modules that see restricted views of the input (e.g., only part of a sentence) to match the predictions of the full model seeing the whole input.

CCG SUPERTAGGING · DEPENDENCY PARSING · MACHINE TRANSLATION · MULTI-TASK LEARNING · NAMED ENTITY RECOGNITION · UNSUPERVISED REPRESENTATION LEARNING
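As a rough illustration of the cross-view idea described above, the sketch below trains an auxiliary module that sees a restricted view of the input to match the full model's predictions on unlabeled data. The layer sizes, the choice of "forward LSTM states only" as the restricted view, and all tensor shapes are assumptions made for the example, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.LSTM(input_size=64, hidden_size=128, batch_first=True, bidirectional=True)
primary_head = nn.Linear(256, 20)   # full model: tags from the BiLSTM output
aux_head = nn.Linear(128, 20)       # auxiliary module: sees only the forward direction

tokens = torch.randn(8, 30, 64)     # a batch of unlabeled sentences (features only)

# Teacher: the full model's predictions on the complete input, treated as fixed targets.
with torch.no_grad():
    full_states, _ = encoder(tokens)
    teacher = F.softmax(primary_head(full_states), dim=-1)

# Student: an auxiliary module restricted to a partial view (forward states only).
states, _ = encoder(tokens)
fwd_only = states[..., :128]
student_log_probs = F.log_softmax(aux_head(fwd_only), dim=-1)

# CVT-style consistency loss: the restricted view must match the full view,
# so the gradients improve both the auxiliary head and the shared encoder.
consistency_loss = F.kl_div(student_log_probs, teacher, reduction="batchmean")
consistency_loss.backward()
```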

One Model To Learn Them All

16 Jun 2017 · tensorflow/tensor2tensor

We present a single model that yields good results on a number of problems spanning multiple domains. Interestingly, even if a block is not crucial for a task, we observe that adding it never hurts performance and in most cases improves it on all tasks.

IMAGE CAPTIONING · IMAGE CLASSIFICATION · MULTI-TASK LEARNING

Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning

ICLR 2018 · facebookresearch/InferSent

A lot of the recent success in natural language processing (NLP) has been driven by distributed vector representations of words trained on large amounts of text in an unsupervised manner. In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model.

MULTI-TASK LEARNING · NATURAL LANGUAGE INFERENCE · PARAPHRASE IDENTIFICATION · SEMANTIC TEXTUAL SIMILARITY

A Hierarchical Multi-task Approach for Learning Embeddings from Semantic Tasks

14 Nov 2018 · huggingface/hmtl

Much effort has been devoted to evaluating whether multi-task learning can be leveraged to learn rich representations that can be used in various Natural Language Processing (NLP) downstream applications. The model is trained in a hierarchical fashion to introduce an inductive bias by supervising a set of low-level tasks at the bottom layers of the model and more complex tasks at the top layers of the model.

MULTI-TASK LEARNING · NAMED ENTITY RECOGNITION · RELATION EXTRACTION
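A minimal sketch of the hierarchical supervision pattern described above: a token-level task is supervised from the lower layer while a sentence-level task is supervised from the upper layer, so both shape the shared representation. The tasks, layer sizes, and random placeholder labels are illustrative assumptions, not the HMTL configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalMTL(nn.Module):
    """Supervise a low-level task at the bottom layer, a higher-level task at the top."""
    def __init__(self, vocab=1000, dim=64, n_ner=9, n_rel=6):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.lower = nn.LSTM(dim, dim, batch_first=True)   # bottom layer
        self.upper = nn.LSTM(dim, dim, batch_first=True)   # top layer, stacked on the bottom
        self.ner_head = nn.Linear(dim, n_ner)               # token-level task from lower states
        self.rel_head = nn.Linear(dim, n_rel)               # sentence-level task from upper states

    def forward(self, token_ids):
        low, _ = self.lower(self.embed(token_ids))
        high, _ = self.upper(low)
        return self.ner_head(low), self.rel_head(high.mean(dim=1))

model = HierarchicalMTL()
tokens = torch.randint(0, 1000, (4, 25))                   # 4 sentences of 25 tokens
ner_logits, rel_logits = model(tokens)
ner_loss = F.cross_entropy(ner_logits.flatten(0, 1), torch.randint(0, 9, (4 * 25,)))
rel_loss = F.cross_entropy(rel_logits, torch.randint(0, 6, (4,)))
(ner_loss + rel_loss).backward()                           # both tasks update the shared layers
```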

HyperFace: A Deep Multi-task Learning Framework for Face Detection, Landmark Localization, Pose Estimation, and Gender Recognition

3 Mar 2016 · takiyu/hyperface

We present an algorithm for simultaneous face detection, landmark localization, pose estimation and gender recognition using deep convolutional neural networks (CNN). The proposed method, called HyperFace, fuses the intermediate layers of a deep CNN using a separate CNN followed by a multi-task learning algorithm that operates on the fused features.

FACE DETECTION · MULTI-TASK LEARNING · POSE ESTIMATION
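The sketch below illustrates the general pattern of fusing intermediate CNN features and branching into task-specific heads. The layer sizes, pooling resolution, and 21-landmark output are assumptions made for the example, not the paper's AlexNet-based architecture.

```python
import torch
import torch.nn as nn

class MultiTaskFaceNet(nn.Module):
    """Fuse early, middle and late conv features, then branch into per-task heads."""
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.block3 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        # Fusion: bring all three feature maps to one spatial size and concatenate them.
        self.pool = nn.AdaptiveAvgPool2d((8, 8))
        self.fuse = nn.Sequential(nn.Conv2d(16 + 32 + 64, 64, 1), nn.ReLU(), nn.Flatten())
        fused_dim = 64 * 8 * 8
        # One head per task: detection score, landmark coordinates, head pose, gender.
        self.detect = nn.Linear(fused_dim, 2)
        self.landmarks = nn.Linear(fused_dim, 21 * 2)
        self.pose = nn.Linear(fused_dim, 3)
        self.gender = nn.Linear(fused_dim, 2)

    def forward(self, img):
        f1 = self.block1(img)
        f2 = self.block2(f1)
        f3 = self.block3(f2)
        fused = self.fuse(torch.cat([self.pool(f1), self.pool(f2), self.pool(f3)], dim=1))
        return self.detect(fused), self.landmarks(fused), self.pose(fused), self.gender(fused)

outputs = MultiTaskFaceNet()(torch.randn(2, 3, 128, 128))
print([o.shape for o in outputs])
```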

Decoupled Classification Refinement: Hard False Positive Suppression for Object Detection

5 Oct 2018 · bowenc0221/Decoupled-Classification-Refinement

In particular, DCR places a separate classification network in parallel with the localization network (base detector). During training, DCR samples hard false positives from the base detector and trains a strong classifier to refine classification results.

MULTI-TASK LEARNING · OBJECT DETECTION
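A rough sketch of the decoupled-refinement idea: mine confidently scored background proposals from a base detector as hard false positives and train a separate classifier on them. The score threshold, tensor shapes, and tiny refiner network are placeholders for illustration, not the paper's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Outputs of a base detector, assumed given: per-proposal scores, labels, and image crops.
scores = torch.rand(200)                  # detector confidence per region proposal
labels = torch.randint(0, 2, (200,))      # 1 = true object, 0 = background
crops = torch.randn(200, 3, 112, 112)     # image crop for each proposal

# Hard false positives: background boxes the detector scored confidently.
hard_fp = (labels == 0) & (scores > 0.5)
keep = hard_fp | (labels == 1)            # train the refiner on hard FPs plus true positives

refiner = nn.Sequential(                  # separate classifier, decoupled from the detector
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)
loss = F.cross_entropy(refiner(crops[keep]), labels[keep])
loss.backward()

# At test time, the detector's and the refiner's confidences are combined
# (e.g., multiplied) to suppress hard false positives.
```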

Revisiting RCNN: On Awakening the Classification Power of Faster RCNN

ECCV 2018 · bowenc0221/Decoupled-Classification-Refinement

Recent region-based object detectors are usually built with separate classification and localization branches on top of shared feature extraction networks. In this paper, we analyze failure cases of state-of-the-art detectors and observe that most hard false positives result from classification instead of localization.

MULTI-TASK LEARNING

Linguistically-Informed Self-Attention for Semantic Role Labeling

EMNLP 2018 · strubell/LISA

Unlike previous models which require significant pre-processing to prepare linguistic features, LISA can incorporate syntax using merely raw tokens as input, encoding the sequence only once to simultaneously perform parsing, predicate detection and role labeling for all predicates. In experiments on CoNLL-2005 SRL, LISA achieves new state-of-the-art performance for a model using predicted predicates and standard word embeddings, attaining 2.5 F1 absolute higher than the previous state-of-the-art on newswire and more than 3.5 F1 on out-of-domain data, nearly 10% reduction in error.

DEPENDENCY PARSING · MULTI-TASK LEARNING · PART-OF-SPEECH TAGGING · PREDICATE DETECTION · SEMANTIC ROLE LABELING (PREDICTED PREDICATES) · WORD EMBEDDINGS

Towards Viewpoint Invariant 3D Human Pose Estimation

23 Mar 2016 · mks0601/V2V-PoseNet_RELEASE

We propose a viewpoint invariant model for 3D human pose estimation from a single depth image. To achieve this, our discriminative model embeds local regions into a learned viewpoint invariant feature space.

3D HUMAN POSE ESTIMATION · MULTI-TASK LEARNING