
# Transfer Learning

393 papers with code · Methodology

Transfer learning is a methodology in which the weights of a model trained on one task are reused on another task, either (a) to construct a fixed feature extractor, or (b) as a weight initialization for fine-tuning.
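The two reuse strategies in this definition can be sketched in PyTorch. The backbone below is a small stand-in for a pretrained network (in practice one would load, e.g., a pretrained torchvision model); all layer sizes and learning rates are illustrative assumptions, not values from any paper on this page.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained backbone; in real use this would be a model
# loaded with pretrained weights (e.g. a torchvision ResNet).
backbone = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, 16),
)

# (a) Fixed feature extractor: freeze the pretrained weights and train
# only a newly attached task-specific head.
for p in backbone.parameters():
    p.requires_grad = False
model_a = nn.Sequential(backbone, nn.Linear(16, 3))  # 3 target classes

# (b) Weight initialization + fine-tuning: copy the pretrained weights
# into a trainable backbone and update everything, typically with a
# smaller learning rate on the pretrained layers than on the new head.
backbone_ft = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, 16),
)
backbone_ft.load_state_dict(backbone.state_dict())  # reuse pretrained weights
model_b = nn.Sequential(backbone_ft, nn.Linear(16, 3))
optimizer = torch.optim.SGD([
    {"params": backbone_ft.parameters(), "lr": 1e-4},   # gentle updates
    {"params": model_b[1].parameters(), "lr": 1e-2},    # new head learns faster
])
```

In variant (a) only the head's parameters receive gradients, so training is cheap and the pretrained features are preserved exactly; variant (b) adapts the whole network to the new task at the cost of more compute and a higher risk of overfitting on small datasets.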

(Image credit: Subodh Malgonde)


# Universal-RCNN: Universal Object Detector via Transferable Graph R-CNN

18 Feb 2020

Finally, an InterDomain Transfer Module is proposed to exploit diverse transfer dependencies across all domains and enhance the regional feature representation by attending and transferring semantic contexts globally.

# Patient-Specific Finetuning of Deep Learning Models for Adaptive Radiotherapy in Prostate CT

17 Feb 2020

We investigate a transfer learning approach, fine-tuning the baseline CNN model to a specific patient, based on imaging acquired in earlier treatment fractions.

# SpotTheFake: An Initial Report on a New CNN-Enhanced Platform for Counterfeit Goods Detection

17 Feb 2020

The counterfeit goods trade nowadays represents more than 3.3% of total world trade, and it is therefore a problem that, now more than ever, needs attention and a reliable solution to reduce its negative impact on modern society.

# Multilinear Compressive Learning with Prior Knowledge

17 Feb 2020

Extensive experiments demonstrate that the proposed knowledge transfer method can effectively train MCL models to compressively sense and synthesize better features for the learning tasks with improved performances, especially when the complexity of the learning task increases.

# The Utility of General Domain Transfer Learning for Medical Language Tasks

16 Feb 2020

The BERT models using either set of pretrained checkpoints outperformed the logistic regression model, achieving sample-weighted average F1-scores of 0.87 and 0.87 for the general-domain model and the combined general- and biomedical-domain model, respectively.

# CRL: Class Representative Learning for Image Classification

16 Feb 2020

Second, the inference step in CRL matches the class representatives against new data.

# Automatic lesion segmentation and Pathological Myopia classification in fundus images

15 Feb 2020

In this paper we present algorithms to diagnose Pathological Myopia (PM) and to detect retinal structures and lesions such as Optic Disc (OD), Fovea, Atrophy, and Detachment.

# Machines Learn Appearance Bias in Face Recognition

13 Feb 2020

In particular, we find that our model is better at judging a person's dominance based on their face than other traits like trustworthiness or likeability, even for emotionally neutral faces.

# Weighted Empirical Risk Minimization: Sample Selection Bias Correction based on Importance Sampling

12 Feb 2020

We consider statistical learning problems, when the distribution $P'$ of the training observations $Z'_1,\; \ldots,\; Z'_n$ differs from the distribution $P$ involved in the risk one seeks to minimize (referred to as the \textit{test distribution}) but is still defined on the same measurable space as $P$ and dominates it.

# x-vectors meet emotions: A study on dependencies between emotion and speaker recognition

12 Feb 2020

Then, we show the effect of emotion on speaker recognition.