Transfer Learning
2819 papers with code • 7 benchmarks • 14 datasets
Transfer Learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new but related problem. This is useful when there is too little data to train a model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.
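As a minimal sketch of this recipe (assuming PyTorch and torchvision; the 10-class target task is a hypothetical stand-in for your own data), one common approach is to freeze a pretrained backbone and train only a newly attached classification head:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a model pretrained on ImageNet (the "source" task).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained backbone so its learned features are preserved.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new head for the target task
# (10 classes is an arbitrary, illustrative choice).
model.fc = nn.Linear(model.fc.in_features, 10)

# Optimize only the new head; the rest of the network is reused as-is.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

When more target data is available, a common variant is to also unfreeze some or all of the backbone layers and fine-tune them at a lower learning rate.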
Latest papers
MULTIFLOW: Shifting Towards Task-Agnostic Vision-Language Pruning
In this challenging setting, the transferable representations already encoded in the pretrained model are a key aspect to preserve.
BatSort: Enhanced Battery Classification with Transfer Learning for Battery Sorting and Recycling
Battery recycling is a critical process for minimizing the environmental harm and resource waste associated with used batteries.
DinoBloom: A Foundation Model for Generalizable Cell Embeddings in Hematology
In hematology, computational models offer significant potential to improve diagnostic accuracy, streamline workflows, and reduce the tedious work of analyzing single cells in peripheral blood or bone marrow smears.
VoltaVision: A Transfer Learning model for electronic component classification
In this paper, we analyze the effectiveness of transfer learning on classifying electronic components.
How does Multi-Task Training Affect Transformer In-Context Capabilities? Investigations with Function Classes
Large language models (LLMs) have recently shown an extraordinary ability to perform unseen tasks based on few-shot examples provided as text, a capability known as in-context learning (ICL).
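As a rough illustration of the few-shot setting this paper studies (the sentiment task and examples below are invented for illustration, not taken from the paper), in-context learning conditions a frozen LLM on input-output demonstrations in the prompt instead of updating its weights:

```python
# A few-shot prompt: the model must infer the task (sentiment labeling)
# from the demonstrations alone, with no gradient updates.
demonstrations = [
    ("The movie was wonderful.", "positive"),
    ("I wasted two hours of my life.", "negative"),
]
query = "An absolute delight from start to finish."

prompt = "\n".join(f"Review: {x}\nLabel: {y}" for x, y in demonstrations)
prompt += f"\nReview: {query}\nLabel:"
print(prompt)  # fed to the LLM, which should complete with "positive"
```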
What Are We Measuring When We Evaluate Large Vision-Language Models? An Analysis of Latent Factors and Biases
Vision-language (VL) models, pretrained on colossal image-text datasets, have attained broad VL competence that is difficult to evaluate.
ANGOFA: Leveraging OFA Embedding Initialization and Synthetic Data for Angolan Language Model
In recent years, the development of pre-trained language models (PLMs) has gained momentum, showcasing their capacity to transcend linguistic barriers and facilitate knowledge transfer across diverse languages.
R^2-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding
Video temporal grounding (VTG) is a fine-grained video understanding problem that aims to ground relevant clips in untrimmed videos given natural language queries.
Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners
In this paper, we argue that large models such as vision and language transformers pre-trained on large datasets can be excellent few-shot incremental learners.
ImageNot: A contrast with ImageNet preserves model rankings
We introduce ImageNot, a dataset designed to match the scale of ImageNet while differing drastically in other aspects.