kDecay: Just adding k-decay items on Learning-Rate Schedule to improve Neural Networks

Recent work has shown that optimizing the learning rate (LR) schedule can be an accurate and efficient way to train deep neural networks. We observe that the rate of change (ROC) of the LR correlates with the training process, which raises the question of how to use this relationship to control training and improve accuracy. We propose a new method, k-decay, which adds a single extra term to commonly used and simple LR schedules (exponential, cosine, and polynomial). It effectively improves the performance of these schedules and also outperforms state-of-the-art LR-schedule algorithms such as SGDR, CLR, and AutoLRS. In k-decay, different LR schedules are generated by adjusting the hyper-parameter \(k\); as \(k\) increases, performance improves. We evaluate the k-decay method on the CIFAR and ImageNet datasets with different neural networks (ResNet, Wide ResNet). Our experiments show that the method improves performance on most of them: accuracy is improved by 1.08\% on CIFAR-10, by 2.07\% on CIFAR-100, and by 1.25\% on ImageNet. Our method is not only general enough to be applied to other LR schedules, but also incurs no additional computational cost.
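The abstract does not spell out the exact form of the extra k-decay term. A common way to read the idea is that the linear progress ratio t/T in a standard schedule is replaced by t^k/T^k, which reduces to the original schedule when k = 1. The sketch below is a minimal illustration of that reading for a cosine schedule; the function name `cosine_k_decay` and the exact formula are assumptions for illustration, not taken from the text above.

```python
import math

def cosine_k_decay(t, T, lr_max, lr_min=0.0, k=1.0):
    """Illustrative k-decay variant of a cosine LR schedule.

    Assumption: the extra k-decay term is modeled by replacing the usual
    progress ratio t/T with t**k / T**k; with k = 1 this reduces to the
    standard cosine annealing schedule.
    """
    progress = (t ** k) / (T ** k)  # k shapes how quickly "progress" advances
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))

# Example: compare k = 1 (standard cosine) with larger k at mid-training.
T = 100
for k in (1, 3, 5):
    print(k, round(cosine_k_decay(t=50, T=T, lr_max=0.1, k=k), 5))
```

With the formula assumed here, a larger k keeps the LR near its maximum for longer and concentrates the decay toward the end of training, while adding no extra computation per step.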
