
Regularization

A family of techniques that constrain model complexity to prevent overfitting and improve generalization. Weight decay and dropout are the most common examples.

Regularization encompasses techniques that constrain learning to prevent overfitting. Approaches include adding penalty terms to the loss function (e.g., L1/L2 penalties), stochastically perturbing the network during training (e.g., dropout), and manipulating the training data (e.g., augmentation). In practice, multiple methods are typically combined.
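The first two approaches above can be sketched in a few lines of NumPy. This is an illustrative sketch, not any library's API: `l2_penalty` and `dropout` are hypothetical helper names, and the dropout shown is the common "inverted" variant that rescales surviving activations at training time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Penalty term: L2 regularization adds lambda * sum(w^2) to the loss.
def l2_penalty(weights, lam=1e-4):
    """Sum of squared weights, scaled by the regularization strength lambda."""
    return lam * sum(np.sum(w ** 2) for w in weights)

# Stochastic perturbation: inverted dropout zeroes activations at train time.
def dropout(x, p=0.5, training=True):
    """Zero each activation with probability p; rescale survivors by
    1/(1-p) so the expected activation is unchanged. Identity at test time."""
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

weights = [rng.standard_normal((4, 3)), rng.standard_normal((3, 2))]
activations = rng.standard_normal((5, 3))

penalty = l2_penalty(weights)              # added to the task loss
dropped = dropout(activations, p=0.5)      # used only while training
```

The total training loss would then be `task_loss + penalty`, so gradient descent trades data fit against weight magnitude.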

Standard image-recognition training pipelines combine several of these techniques. ResNet training typically uses L2 regularization (weight decay = 0.0001) together with data augmentation; EfficientNet additionally applies dropout and Stochastic Depth, each contributing a complementary effect.
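In an SGD update, L2 regularization appears as a weight-decay term that shrinks every weight toward zero in proportion to its magnitude. A minimal sketch, with the decay coefficient 0.0001 from the ResNet recipe above (the function name is illustrative):

```python
import numpy as np

def sgd_weight_decay_step(w, grad, lr=0.1, weight_decay=1e-4):
    """One SGD step with L2 weight decay: the gradient of the penalty
    lambda * ||w||^2 / 2 is lambda * w, added to the task gradient."""
    return w - lr * (grad + weight_decay * w)

w = np.array([1.0, -2.0, 0.5])
grad = np.zeros_like(w)  # with a zero task gradient, only the decay acts
w_next = sgd_weight_decay_step(w, grad, lr=0.1, weight_decay=1e-4)
# Each weight shrinks by the same factor (1 - lr * weight_decay) per step.
```

Over many steps this multiplicative shrinkage keeps unused weights small, which is why "weight decay" and "L2 regularization" are used interchangeably for plain SGD.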

Recent research positions data augmentation itself as a powerful regularizer. Mixup linearly interpolates pairs of images and their labels, CutMix replaces a patch of one image with a patch from another and mixes the labels proportionally, and RandAugment automates the search over augmentation policies. All three show strong effects that complement traditional weight penalties.
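Mixup is the simplest of the three to write down: sample a mixing coefficient from a Beta distribution and form the same convex combination of both inputs and one-hot labels. A minimal NumPy sketch (the helper name and the toy 2-class labels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Mixup: blend two examples and their one-hot labels with the same
    coefficient lam ~ Beta(alpha, alpha)."""
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y

x1, x2 = rng.random((8, 8)), rng.random((8, 8))  # two toy "images"
y1 = np.array([1.0, 0.0])  # one-hot label, class 0
y2 = np.array([0.0, 1.0])  # one-hot label, class 1
x_mix, y_mix = mixup(x1, y1, x2, y2)
# The mixed label is soft, e.g. [0.7, 0.3], and still sums to 1.
```

Training on these soft targets discourages the network from becoming overconfident on any single example, which is the regularizing effect.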
