Transfer Learning

1803 papers with code • 8 benchmarks • 11 datasets

Transfer learning is a methodology in which the weights of a model trained on one task are reused on another task, either (a) as a fixed feature extractor or (b) as a weight initialization that is then fine-tuned on the new task.

(Image credit: Subodh Malgonde)
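
The two uses described above can be sketched in a few lines of PyTorch/torchvision. This is a minimal illustration only: the ResNet-50 backbone, the 10-class head, and the learning rates are assumptions, not taken from any particular paper on this page.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from weights learned on the source task (here: ImageNet).
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, 10)  # new head for the target task

    # (a) Fixed feature extractor: freeze the pretrained backbone, train only the new head.
    for name, param in model.named_parameters():
        if not name.startswith("fc"):
            param.requires_grad = False
    head_optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)

    # (b) Fine-tuning: unfreeze everything, but update the pretrained layers
    # with a smaller learning rate than the freshly initialized head.
    for param in model.parameters():
        param.requires_grad = True
    finetune_optimizer = torch.optim.SGD(
        [
            {"params": [p for n, p in model.named_parameters() if not n.startswith("fc")], "lr": 1e-4},
            {"params": model.fc.parameters(), "lr": 1e-2},
        ],
        momentum=0.9,
    )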

Most implemented papers

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

tensorflow/tpu ICML 2019

Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available.

Universal Language Model Fine-tuning for Text Classification

fastai/fastai ACL 2018

Inductive transfer learning has greatly impacted computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch.

Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks

UKPLab/sentence-transformers IJCNLP 2019

However, BERT requires that both sentences be fed into the network together, which causes a massive computational overhead: finding the most similar pair in a collection of 10,000 sentences requires about 50 million inference computations (~65 hours).
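
Sentence-BERT avoids that pairwise cost by encoding each sentence once and comparing the cached embeddings. A minimal sketch with the linked sentence-transformers library follows; the checkpoint name and toy sentences are illustrative assumptions.

    from sentence_transformers import SentenceTransformer, util

    # A cross-encoder needs one forward pass per sentence pair:
    # n * (n - 1) / 2 = 10_000 * 9_999 / 2 = 49_995_000 passes for 10,000 sentences.
    # A bi-encoder like Sentence-BERT encodes each sentence once (10,000 passes) ...
    sentences = ["A man is eating food.", "Someone is eating a meal.", "The sky is blue."]
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(sentences, convert_to_tensor=True)

    # ... after which similarity search is just cheap cosine comparisons on cached vectors.
    scores = util.cos_sim(embeddings, embeddings)
    print(scores)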

High Quality Monocular Depth Estimation via Transfer Learning

ialhashim/DenseDepth 31 Dec 2018

Accurate depth estimation from images is a fundamental task in many applications including scene understanding and reconstruction.

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

google-research/text-to-text-transfer-transformer arXiv 2019

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
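
As a concrete illustration of that pre-train-then-fine-tune recipe, here is a minimal sketch with the Hugging Face transformers library; the "t5-small" checkpoint and the translation example are illustrative assumptions, not the paper's experimental setup.

    from transformers import AutoTokenizer, T5ForConditionalGeneration

    tokenizer = AutoTokenizer.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")  # weights from pre-training on a data-rich task

    # Downstream tasks are cast into the same text-to-text format, so fine-tuning
    # reuses the pre-training objective: predict the target text from the input text.
    inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
    labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids

    loss = model(**inputs, labels=labels).loss  # loss for one fine-tuning step
    loss.backward()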

ResNeSt: Split-Attention Networks

zhanghang1989/ResNeSt 19 Apr 2020

It is well known that featuremap attention and multi-path representation are important for visual recognition.

Bag of Tricks for Image Classification with Convolutional Neural Networks

dmlc/gluon-cv CVPR 2019

Much of the recent progress made in image classification research can be credited to training procedure refinements, such as changes in data augmentations and optimization methods.
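
Two refinements of that kind, sketched in PyTorch: label smoothing and a cosine learning-rate schedule. The hyperparameter values are illustrative, not the paper's exact settings.

    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet50(weights=None)
    criterion = nn.CrossEntropyLoss(label_smoothing=0.1)  # label smoothing
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=120)  # cosine decay over 120 epochs

    # Inside the training loop: run the usual forward/backward/optimizer steps,
    # then advance the schedule once per epoch with scheduler.step().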

DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter

huggingface/transformers NeurIPS 2019

As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models in on-the-edge and/or under constrained computational training or inference budgets remains challenging.
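
A quick way to see the "smaller" part is to count parameters with the huggingface/transformers library linked above; the checkpoint name is the standard public one and the figures in the comment are approximate.

    from transformers import AutoModel

    distilbert = AutoModel.from_pretrained("distilbert-base-uncased")
    n_params = sum(p.numel() for p in distilbert.parameters())
    print(f"{n_params / 1e6:.0f}M parameters")  # roughly 66M, versus roughly 110M for bert-base-uncased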

Supervised Learning of Universal Sentence Representations from Natural Language Inference Data

facebookresearch/InferSent EMNLP 2017

Many modern NLP systems rely on word embeddings, previously trained in an unsupervised manner on large corpora, as base features.

Universal Sentence Encoder

facebookresearch/InferSent 29 Mar 2018

For both variants, we investigate and report the relationship between model complexity, resource consumption, the availability of transfer task training data, and task performance.