Transfer Learning

2819 papers with code • 7 benchmarks • 14 datasets

Transfer Learning is a machine learning technique in which a model trained on one task is re-purposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new, related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.
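
In practice this usually means loading a backbone pre-trained on a large source dataset, freezing its weights, replacing the output layer, and training only the new head on the target data. The following is a minimal sketch of that workflow, assuming PyTorch and torchvision (0.13 or newer) are available; the 10-class target task and the dummy batch are hypothetical placeholders.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Load a ResNet-18 backbone pre-trained on ImageNet.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pre-trained backbone so its learned features are preserved.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final layer with a new head for the target task
    # (10 classes here, a hypothetical example).
    num_classes = 10
    model.fc = nn.Linear(model.fc.in_features, num_classes)

    # Only the new head's parameters are updated by the optimizer.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # One illustrative training step on dummy data.
    inputs = torch.randn(8, 3, 224, 224)            # batch of 8 RGB images
    labels = torch.randint(0, num_classes, (8,))    # random target labels
    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)
    loss.backward()
    optimizer.step()

Once the new head has converged, the backbone can optionally be unfrozen and the whole model fine-tuned end-to-end at a lower learning rate.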

MULTIFLOW: Shifting Towards Task-Agnostic Vision-Language Pruning

farinamatteo/multiflow 8 Apr 2024

In this challenging setting, the transferable representations already encoded in the pretrained model are a key aspect to preserve.

BatSort: Enhanced Battery Classification with Transfer Learning for Battery Sorting and Recycling

friedrichzhao/singapore_battery_dataset 8 Apr 2024

Battery recycling is a critical process for minimizing the environmental harm and resource waste associated with used batteries.

DinoBloom: A Foundation Model for Generalizable Cell Embeddings in Hematology

faceonlive/ai-research 7 Apr 2024

In hematology, computational models offer significant potential to improve diagnostic accuracy, streamline workflows, and reduce the tedious work of analyzing single cells in peripheral blood or bone marrow smears.

VoltaVision: A Transfer Learning model for electronic component classification

faceonlive/ai-research ICLR Tiny Papers 2024

In this paper, we analyze the effectiveness of transfer learning on classifying electronic components.

How does Multi-Task Training Affect Transformer In-Context Capabilities? Investigations with Function Classes

harmonbhasin/curriculum_learning_icl 4 Apr 2024

Large language models (LLMs) have recently shown the extraordinary ability to perform unseen tasks based on few-shot examples provided as text, also known as in-context learning (ICL).

What Are We Measuring When We Evaluate Large Vision-Language Models? An Analysis of Latent Factors and Biases

jq-zh/olive-dataset 3 Apr 2024

Vision-language (VL) models, pretrained on colossal image-text datasets, have attained broad VL competence that is difficult to evaluate.

ANGOFA: Leveraging OFA Embedding Initialization and Synthetic Data for Angolan Language Model

zuela-ai/angofa 3 Apr 2024

In recent years, the development of pre-trained language models (PLMs) has gained momentum, showcasing their capacity to transcend linguistic barriers and facilitate knowledge transfer across diverse languages.

R^2-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding

yeliudev/R2-Tuning 2 Apr 2024

Video temporal grounding (VTG) is a fine-grained video understanding problem that aims to ground relevant clips in untrimmed videos given natural language queries.

Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners

khu-agi/privilege 2 Apr 2024

In this paper, we argue that large models such as vision and language transformers pre-trained on large datasets can be excellent few-shot incremental learners.

ImageNot: A contrast with ImageNet preserves model rankings

olawalesalaudeen/imagenot 2 Apr 2024

We introduce ImageNot, a dataset designed to match the scale of ImageNet while differing drastically in other aspects.
