Transfer Learning
A machine learning technique that leverages knowledge from a model pre-trained on a large dataset to improve performance on a different but related task, especially when labeled data is scarce.
Transfer learning reuses knowledge from one task (source) to improve performance on a related task (target). In computer vision, the standard approach takes a CNN pre-trained on ImageNet and fine-tunes it on domain-specific data, reducing required labeled data and training time.
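As an illustration of that workflow, here is a minimal PyTorch sketch, assuming torchvision is available; the five-class target task is a hypothetical stand-in for domain-specific data:

```python
import torch.nn as nn
from torchvision import models

# Load a ResNet-50 with ImageNet pre-trained weights (downloaded on first use).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Swap the 1000-class ImageNet head for one sized to the target task.
num_classes = 5  # hypothetical target task, e.g. five diagnostic categories
model.fc = nn.Linear(model.fc.in_features, num_classes)
```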
Compared to training from scratch, transfer learning offers faster convergence, lower data requirements, and better generalization. This is particularly valuable in domains such as medical imaging, where large labeled datasets are hard to obtain.
- Feature extraction: Freeze the pre-trained convolutional layers and train only a new classification head; works well when target data is extremely limited (see the sketch after this list)
- Fine-tuning: Unfreeze some or all layers and continue training with a low learning rate; achieves higher accuracy when moderate target data is available (also shown in the sketch after this list)
- Domain adaptation: Bridges the distribution gap between source and target domains through adversarial training or domain-invariant feature learning
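The first two strategies differ mainly in which parameters receive gradients and at what learning rate. A sketch of both, again assuming a torchvision ResNet-50 with a hypothetical five-class head; the learning rates are illustrative, not prescriptive:

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 5)  # hypothetical 5-class head

# Feature extraction: freeze the entire backbone; only the new head trains.
for param in model.parameters():
    param.requires_grad = False
for param in model.fc.parameters():
    param.requires_grad = True
head_optimizer = torch.optim.AdamW(model.fc.parameters(), lr=1e-3)

# Fine-tuning: unfreeze everything, but give the backbone a much lower
# learning rate than the head so the pre-trained features shift slowly.
for param in model.parameters():
    param.requires_grad = True
backbone_params = [p for name, p in model.named_parameters()
                   if not name.startswith("fc.")]
finetune_optimizer = torch.optim.AdamW([
    {"params": backbone_params, "lr": 1e-5},
    {"params": model.fc.parameters(), "lr": 1e-4},
])
```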
Foundation models like CLIP, DINOv2, and SAM provide representations that generalize across diverse tasks with minimal adaptation. Parameter-efficient methods such as LoRA and adapter layers enable effective transfer while updating only a small fraction of parameters.
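To make the parameter-efficient idea concrete, here is a minimal hand-rolled LoRA wrapper for a single linear layer. It follows the standard LoRA formulation (a frozen base weight plus a trainable low-rank update scaled by alpha/rank), but the rank, alpha, and layer sizes are illustrative assumptions; in practice a library such as Hugging Face PEFT would typically be used:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (LoRA)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # original weights stay frozen
            p.requires_grad = False
        # A is small random, B is zero, so the update starts at exactly zero.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # y = W x + (alpha / r) * B A x; only A and B receive gradients.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(768, 768), rank=8)  # illustrative sizes
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")
```

Because B starts at zero, the wrapped layer initially computes exactly the pre-trained function, and training updates only the roughly 2% of parameters held in A and B.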