
Early Stopping

Early Stopping is a regularization technique for deep neural networks that halts training when parameter updates no longer yield improvements on a validation set. In essence, we store and update the current best parameters during training, and when no parameter update has improved on them for a set number of iterations, we stop training and use the last best parameters. It works as a regularizer by restricting the optimization procedure to a smaller volume of parameter space.
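Below is a minimal sketch of the procedure in Python, assuming a PyTorch-style model (one exposing state_dict()/load_state_dict()) and two hypothetical helpers, train_one_epoch and evaluate, which run one epoch of optimization and return a validation loss, respectively; the patience parameter is the "set number of iterations" described above.

```python
import copy

def train_with_early_stopping(model, train_one_epoch, evaluate,
                              max_epochs=100, patience=10):
    """Train until the validation loss stops improving for `patience` epochs.

    `train_one_epoch(model)` and `evaluate(model)` are assumed helpers:
    the first runs one epoch of optimization, the second returns a
    validation loss (lower is better).
    """
    best_loss = float("inf")
    best_params = copy.deepcopy(model.state_dict())  # store the current best parameters
    epochs_without_improvement = 0

    for epoch in range(max_epochs):
        train_one_epoch(model)
        val_loss = evaluate(model)

        if val_loss < best_loss:
            # Validation improved: remember these parameters.
            best_loss = val_loss
            best_params = copy.deepcopy(model.state_dict())
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # no improvement for `patience` epochs: stop training

    # Restore the last best parameters rather than the final ones.
    model.load_state_dict(best_params)
    return model
```

Restoring best_params at the end is what makes the final model use the last best parameters rather than whatever the weights happened to be at the stopping epoch.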

Image Source: Ramazan Gençay

Tasks

Task                            Papers   Share
Image Generation                    39   9.49%
Test                                21   5.11%
Conditional Image Generation        16   3.89%
Image Classification                13   3.16%
General Classification              11   2.68%
Denoising                           10   2.43%
Reinforcement Learning (RL)          9   2.19%
Clustering                           8   1.95%
Benchmarking                         7   1.70%

Categories

Regularization