
Transfer Learning

311 papers with code · Methodology

Transfer learning is a methodology in which the weights of a model trained on one task are reused on another task, either (a) to construct a fixed feature extractor or (b) as a weight initialization for subsequent fine-tuning.
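A minimal PyTorch sketch of both uses, assuming a torchvision ResNet-18 pretrained on ImageNet and a hypothetical 10-class target task (the learning rates are illustrative defaults, not prescribed values):

```python
import torch
import torch.nn as nn
from torchvision import models

# (a) Fixed feature extractor: freeze the pretrained backbone and
#     train only a newly attached classification head.
backbone = models.resnet18(pretrained=True)
for param in backbone.parameters():
    param.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 10)  # new head stays trainable

# (b) Weight initialization / fine-tuning: keep the pretrained weights
#     trainable and update the whole network, typically with a smaller
#     learning rate for the pretrained layers than for the new head.
finetune_model = models.resnet18(pretrained=True)
finetune_model.fc = nn.Linear(finetune_model.fc.in_features, 10)
optimizer = torch.optim.SGD(
    [
        {"params": [p for n, p in finetune_model.named_parameters()
                    if not n.startswith("fc")], "lr": 1e-4},  # pretrained layers
        {"params": finetune_model.fc.parameters(), "lr": 1e-2},  # new head
    ],
    momentum=0.9,
)
```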


Latest papers without code

Pay Attention to Features, Transfer Learn Faster CNNs

ICLR 2020

Deep convolutional neural networks are now widely deployed in vision applications, but the size of training data can bottleneck their performance.

TRANSFER LEARNING

What Data is Useful for My Data: Transfer Learning with a Mixture of Self-Supervised Experts

ICLR 2020

We assume that a client, a target application with its own small labeled dataset, is only interested in fetching a subset of the server’s data that is most relevant to its own target domain.

IMAGE CLASSIFICATION INSTANCE SEGMENTATION OBJECT DETECTION SEMANTIC SEGMENTATION TRANSFER LEARNING

FinBERT: Financial Sentiment Analysis with Pre-trained Language Models

ICLR 2020

While many sentiment classification solutions report high accuracy scores in product or movie review datasets, the performance of the methods in niche domains such as finance still largely falls behind.

LANGUAGE MODELLING SENTIMENT ANALYSIS TRANSFER LEARNING

Winning the Lottery with Continuous Sparsification

ICLR 2020

The Lottery Ticket Hypothesis from Frankle & Carbin (2019) conjectures that, for typically-sized neural networks, it is possible to find small sub-networks which train faster and achieve better performance than their original counterparts.

TRANSFER LEARNING

Reweighted Proximal Pruning for Large-Scale Language Representation

ICLR 2020

Is it possible to compress these large-scale language representation models?

TRANSFER LEARNING

Compositional Continual Language Learning

ICLR 2020

Experimental results show that the proposed method significantly improves over state-of-the-art methods: it enables knowledge transfer and prevents catastrophic forgetting, achieving more than 85% accuracy up to 100 stages, compared with less than 50% accuracy for baselines.

MACHINE TRANSLATION TRANSFER LEARNING

Multi-source Multi-view Transfer Learning in Neural Topic Modeling with Pretrained Topic and Word Embeddings

ICLR 2020

Though word embeddings and topics are complementary representations, several past works have only used pretrained word embeddings in (neural) topic modeling to address the data sparsity problem in short texts or small collections of documents.

INFORMATION RETRIEVAL TRANSFER LEARNING WORD EMBEDDINGS

Knowledge Transfer via Student-Teacher Collaboration

ICLR 2020

One way to compress these heavy models is knowledge transfer (KT), in which a lightweight student network is trained by absorbing knowledge from a powerful teacher network (a generic distillation sketch follows this entry).

TRANSFER LEARNING
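As a point of reference for the KT setup above, here is a generic knowledge-distillation loss in PyTorch, not the paper's specific Student-Teacher Collaboration method; the temperature T and weight alpha are illustrative defaults:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Generic KT objective: KL divergence to the teacher's softened
    predictions plus the usual hard-label cross-entropy."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        soft_targets,
        reduction="batchmean",
    ) * (T * T)  # scale to keep gradient magnitudes comparable across temperatures
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```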

Robust Few-Shot Learning with Adversarially Queried Meta-Learners

ICLR 2020

Previous work on adversarially robust neural networks requires large training sets and computationally expensive training procedures.

FEW-SHOT LEARNING META-LEARNING TRANSFER LEARNING

Learning to Rank Learning Curves

ICLR 2020

Many automated machine learning methods, such as those for hyperparameter and neural architecture optimization, are computationally expensive because they involve training many different model configurations.

LEARNING-TO-RANK NEURAL ARCHITECTURE SEARCH TRANSFER LEARNING