Transfer Learning
2857 papers with code • 7 benchmarks • 15 datasets
Transfer Learning is a machine learning technique in which a model trained on one task is re-purposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new but related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.
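The standard recipe described above can be sketched as follows: freeze a pre-trained backbone and train only a new task-specific head. This is a minimal PyTorch illustration; the small `nn.Sequential` backbone is a hypothetical stand-in for a real pre-trained network such as a torchvision ResNet.

```python
import torch
import torch.nn as nn

# Stand-in for a pre-trained backbone (in practice you would load
# e.g. torchvision.models.resnet18 with pre-trained weights).
backbone = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
)

# Freeze the pre-trained weights so they are not updated during fine-tuning.
for p in backbone.parameters():
    p.requires_grad = False

# New head for the target task, trained from scratch.
head = nn.Linear(128, 10)
model = nn.Sequential(backbone, head)

# Only the head's (trainable) parameters are given to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

# One toy training step on random data.
x = torch.randn(4, 1, 28, 28)
targets = torch.tensor([0, 1, 2, 3])
loss = nn.functional.cross_entropy(model(x), targets)
loss.backward()
optimizer.step()
```

With more target-task data, a common variant is to unfreeze the last few backbone layers as well and fine-tune them with a lower learning rate.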
(Image credit: Subodh Malgonde)
Libraries
Use these libraries to find Transfer Learning models and implementations.
Datasets
Subtasks
Latest papers with no code
Bridging the Bosphorus: Advancing Turkish Large Language Models through Strategies for Low-Resource Language Adaptation and Benchmarking
Large Language Models (LLMs) are becoming crucial across various fields, emphasizing the urgency for high-quality models in underrepresented languages.
Dual Relation Mining Network for Zero-Shot Learning
Specifically, we introduce a Dual Attention Block (DAB) for visual-semantic relationship mining, which enriches visual information by multi-level feature fusion and conducts spatial attention for visual to semantic embedding.
Mind the Gap Between Synthetic and Real: Utilizing Transfer Learning to Probe the Boundaries of Stable Diffusion Generated Data
Building upon our insights that mainly later layers are responsible for the drop, we investigate the data-efficiency of fine-tuning a synthetically trained model with real data applied to only those last layers.
Spatial Transfer Learning with Simple MLP
A first step toward investigating the potential of transfer learning applied to the field of spatial statistics.
Stable Diffusion Dataset Generation for Downstream Classification Tasks
Recent advances in generative artificial intelligence have enabled the creation of high-quality synthetic data that closely mimics real-world data.
FedProK: Trustworthy Federated Class-Incremental Learning via Prototypical Feature Knowledge Transfer
Federated Class-Incremental Learning (FCIL) focuses on continually transferring the previous knowledge to learn new classes in dynamic Federated Learning (FL).
CNN-LSTM and Transfer Learning Models for Malware Classification based on Opcodes and API Calls
In this paper, we propose a novel model for a malware classification system based on Application Programming Interface (API) calls and opcodes, to improve classification accuracy.
Few-Shot Fruit Segmentation via Transfer Learning
By leveraging pre-trained neural networks, accurate semantic segmentation of fruit in the field is achieved with only a few labeled images.
GMP-ATL: Gender-augmented Multi-scale Pseudo-label Enhanced Adaptive Transfer Learning for Speech Emotion Recognition via HuBERT
The continuous evolution of pre-trained speech models has greatly advanced Speech Emotion Recognition (SER).
TIPAA-SSL: Text Independent Phone-to-Audio Alignment based on Self-Supervised Learning and Knowledge Transfer
In this paper, we present a novel approach for text-independent phone-to-audio alignment based on phoneme recognition, representation learning, and knowledge transfer.