Transfer Learning

202 papers with code · Methodology

Transfer learning is a methodology in which the weights of a model trained on one task are reused on another task, either (a) to construct a fixed feature extractor or (b) as a weight initialization for further fine-tuning.
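
In practice both modes are only a few lines in a modern framework. A minimal PyTorch sketch of (a) a frozen feature extractor with a new task head and (b) full fine-tuning from a pretrained initialization, assuming a torchvision ResNet-18 backbone and a hypothetical 10-class target task:

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 10  # hypothetical target-task label count

# (a) Fixed feature extractor: freeze the pretrained weights, train only a new head.
extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in extractor.parameters():
    param.requires_grad = False
extractor.fc = nn.Linear(extractor.fc.in_features, num_classes)  # the new head stays trainable

# (b) Weight initialization / fine-tuning: start from the pretrained weights and
# update everything, typically with a smaller learning rate on the backbone.
finetuned = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
finetuned.fc = nn.Linear(finetuned.fc.in_features, num_classes)
optimizer = torch.optim.SGD(
    [
        {"params": [p for n, p in finetuned.named_parameters() if not n.startswith("fc.")], "lr": 1e-4},
        {"params": finetuned.fc.parameters(), "lr": 1e-2},
    ],
    momentum=0.9,
)
```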

Greatest papers with code

Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data

18 Oct 2016 tensorflow/models

The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users.

TRANSFER LEARNING
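
The private knowledge transfer described above boils down to teachers, each trained on a disjoint data partition, voting on every query, with the ensemble answering through a noise-perturbed majority. A minimal NumPy sketch of that noisy vote aggregation (the number of teachers, the noise scale, and the random votes are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_aggregate(teacher_votes, num_classes, gamma=0.05):
    """teacher_votes: 1-D array with each teacher's predicted class for one query."""
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(scale=1.0 / gamma, size=num_classes)  # Laplace noise protects individual records
    return int(np.argmax(counts))

# Illustrative only: 250 teachers vote on a single query with 10 possible classes.
votes = rng.integers(0, 10, size=250)
label_for_student = noisy_aggregate(votes, num_classes=10)
print(label_for_student)
```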

Large-scale Simple Question Answering with Memory Networks

5 Jun 2015 facebookresearch/ParlAI

Training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions.

QUESTION ANSWERING · TRANSFER LEARNING

Easy Transfer Learning By Exploiting Intra-domain Structures

2 Apr 2019 jindongwang/transferlearning

In this paper, we propose a practically Easy Transfer Learning (EasyTL) approach which requires no model selection and hyperparameter tuning, while achieving competitive performance.

TRANSFER LEARNING

DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition

6 Oct 2013 jetpacapp/DeepBeliefSDK

We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks.

DOMAIN ADAPTATION · OBJECT RECOGNITION · SCENE RECOGNITION · TRANSFER LEARNING
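
The recipe evaluated here is the fixed-feature-extractor mode of transfer: take activations from a network pretrained on a large recognition task and train only a simple classifier on top for the new task. A minimal sketch of that idea, using a torchvision backbone and a scikit-learn linear model as illustrative stand-ins rather than the paper's exact setup:

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LogisticRegression

torch.manual_seed(0)

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()   # expose the penultimate-layer activations as generic features
backbone.eval()

@torch.no_grad()
def extract_features(images):
    # images: (N, 3, 224, 224) tensor, normalized the way the backbone expects
    return backbone(images).cpu().numpy()

# Hypothetical tiny target task: random images and binary labels stand in for real data.
images = torch.randn(16, 3, 224, 224)
labels = torch.randint(0, 2, (16,)).numpy()
classifier = LogisticRegression(max_iter=1000).fit(extract_features(images), labels)
```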

Bag of Tricks for Image Classification with Convolutional Neural Networks

CVPR 2019 dmlc/gluon-cv

Much of the recent progress made in image classification research can be credited to training procedure refinements, such as changes in data augmentations and optimization methods.

IMAGE CLASSIFICATION · OBJECT DETECTION · SEMANTIC SEGMENTATION · TRANSFER LEARNING
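
Two of the training-procedure refinements studied in this line of work, label smoothing and a cosine learning-rate schedule, are one-liners in modern frameworks. A minimal PyTorch sketch (the model choice and hyperparameter values are illustrative, not the paper's exact recipe):

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50()
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)                           # label smoothing
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=120)   # cosine decay over 120 epochs

# Inside the training loop: loss = criterion(model(images), labels), backprop,
# optimizer.step(), and scheduler.step() once per epoch.
```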

Tencent ML-Images: A Large-Scale Multi-Label Image Database for Visual Representation Learning

7 Jan 2019 Tencent/tencent-ml-images

In this work, we propose to train CNNs from images annotated with multiple tags, to enhance the quality of visual representation of the trained CNN model.

IMAGE CLASSIFICATION · OBJECT DETECTION · REPRESENTATION LEARNING · SEMANTIC SEGMENTATION · TRANSFER LEARNING
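
Training a CNN from images annotated with multiple tags amounts to a multi-label objective: each tag becomes an independent binary decision rather than one softmax over classes. A minimal PyTorch sketch of that setup (the backbone, tag count, and example targets are illustrative assumptions):

```python
import torch
import torch.nn as nn
from torchvision import models

num_tags = 1000  # illustrative tag vocabulary size
model = models.resnet101()
model.fc = nn.Linear(model.fc.in_features, num_tags)
criterion = nn.BCEWithLogitsLoss()      # one independent binary decision per tag

images = torch.randn(4, 3, 224, 224)
targets = torch.zeros(4, num_tags)
targets[0, [3, 42, 107]] = 1.0          # a single image can carry several tags at once
loss = criterion(model(images), targets)
```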

Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis

NeurIPS 2018 tensorflow/lingvo

We describe a neural network-based system for text-to-speech (TTS) synthesis that is able to generate speech audio in the voice of many different speakers, including those unseen during training.

SPEAKER VERIFICATION · SPEECH SYNTHESIS · TEXT-TO-SPEECH SYNTHESIS · TRANSFER LEARNING
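
The transfer here comes from a separately trained speaker-verification encoder: its utterance embedding conditions the synthesizer, which is what lets the system imitate voices unseen during training. A heavily simplified sketch of that conditioning pattern (all modules, sizes, and the toy speaker encoder are illustrative assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class ConditionedSynthesizer(nn.Module):
    """Toy synthesizer decoder conditioned on a speaker embedding."""

    def __init__(self, speaker_encoder, text_dim=512, spk_dim=256, mel_dim=80):
        super().__init__()
        self.speaker_encoder = speaker_encoder        # pretrained on speaker verification, kept frozen
        for p in self.speaker_encoder.parameters():
            p.requires_grad = False
        self.decoder = nn.GRU(text_dim + spk_dim, 512, batch_first=True)
        self.to_mel = nn.Linear(512, mel_dim)

    def forward(self, text_states, reference_audio):
        spk = self.speaker_encoder(reference_audio)                 # (B, spk_dim) voice embedding
        spk = spk.unsqueeze(1).expand(-1, text_states.size(1), -1)  # broadcast along the text axis
        out, _ = self.decoder(torch.cat([text_states, spk], dim=-1))
        return self.to_mel(out)                                     # predicted mel-spectrogram frames

# Stand-in speaker encoder: in the real system this is a network trained for
# speaker verification; here a single linear layer keeps the sketch runnable.
toy_encoder = nn.Sequential(nn.Flatten(), nn.Linear(16000, 256))
tts = ConditionedSynthesizer(toy_encoder)
mels = tts(torch.randn(2, 50, 512), torch.randn(2, 16000))  # -> (2, 50, 80)
```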

Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning

ICLR 2018 facebookresearch/InferSent

In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model.

MULTI-TASK LEARNING · NATURAL LANGUAGE INFERENCE · PARAPHRASE IDENTIFICATION · SEMANTIC TEXTUAL SIMILARITY
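
The core pattern behind this kind of multi-task sentence-representation learning is a single shared sentence encoder trained through several task-specific heads, with batches drawn from the different objectives. A minimal PyTorch sketch of that structure (the encoder architecture, task list, and sizes are illustrative assumptions; pair tasks such as NLI would in practice combine two sentence vectors before the head):

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """One sentence encoder shared across all training objectives."""

    def __init__(self, vocab_size=30000, emb_dim=300, hidden=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, token_ids):                  # (B, T) token ids -> (B, 2 * hidden) sentence vector
        states, _ = self.rnn(self.embed(token_ids))
        return states.max(dim=1).values            # max-pool over time

encoder = SharedEncoder()
heads = nn.ModuleDict({
    "nli": nn.Linear(2048, 3),          # entailment / neutral / contradiction
    "paraphrase": nn.Linear(2048, 2),   # paraphrase or not
})

# Each training step samples one task; gradients from its head update the shared encoder.
task = "nli"
tokens = torch.randint(0, 30000, (8, 20))
logits = heads[task](encoder(tokens))
```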

Universal Sentence Encoder

29 Mar 2018 facebookresearch/InferSent

For both variants, we investigate and report the relationship between model complexity, resource consumption, the availability of transfer task training data, and task performance.

SEMANTIC TEXTUAL SIMILARITY · SENTENCE EMBEDDINGS · SENTIMENT ANALYSIS · SUBJECTIVITY ANALYSIS · TEXT CLASSIFICATION · TRANSFER LEARNING · WORD EMBEDDINGS
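
A typical way such a pretrained sentence encoder is applied to a transfer task like semantic textual similarity is to embed sentences and compare them with cosine similarity. A minimal usage sketch, assuming the publicly released TF-Hub module for the Universal Sentence Encoder and two illustrative sentences:

```python
import numpy as np
import tensorflow_hub as hub

encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
embeddings = encoder(["How old are you?", "What is your age?"]).numpy()

# Cosine similarity between the two sentence embeddings.
a, b = embeddings
similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(similarity)
```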