Repeat before Forgetting: Spaced Repetition for Efficient and Effective Training of Neural Networks
We present a novel approach for training artificial neural networks. Our approach is inspired by broad evidence in psychology showing that human learners can learn efficiently and effectively by increasing the intervals of time between subsequent reviews of previously learned material (spaced repetition). We investigate the analogy between training neural models and psychological findings on human memory, and develop an efficient and effective algorithm for training neural models. The core of our algorithm is a cognitively-motivated scheduler according to which training instances and their "reviews" are spaced over time. Our algorithm uses only 34-50% of the data per epoch, is 2.9-4.8 times faster than standard training, and outperforms competing state-of-the-art baselines. Our code is available at scholar.harvard.edu/hadi/RbF/.
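To make the scheduling idea concrete, below is a minimal, hypothetical sketch of a spaced-repetition scheduler for training data. It is not the paper's RbF scheduler: the class name `SpacedRepetitionScheduler`, the loss `threshold`, and the doubling rule for delays are illustrative assumptions standing in for the paper's cognitively-motivated spacing; the sketch only shows the general pattern of reviewing well-learned instances less often and hard instances sooner.

```python
import numpy as np


class SpacedRepetitionScheduler:
    """Toy spaced-repetition scheduler over training instances.

    Illustrative only (not the paper's method): instances with low loss get
    progressively longer review delays; instances with high loss are
    rescheduled for the very next epoch.
    """

    def __init__(self, num_instances, max_delay=8):
        # Delay (in epochs) before each instance is reviewed again.
        self.delay = np.ones(num_instances, dtype=int)
        # Epoch at which each instance is next due for review.
        self.next_review = np.zeros(num_instances, dtype=int)
        self.max_delay = max_delay

    def due(self, epoch):
        """Return indices of instances scheduled for review at this epoch."""
        return np.where(self.next_review <= epoch)[0]

    def update(self, epoch, indices, losses, threshold=0.1):
        """Reschedule reviewed instances based on their current loss."""
        for i, loss in zip(indices, losses):
            if loss < threshold:
                # Well-learned: double the spacing, up to a cap.
                self.delay[i] = min(self.delay[i] * 2, self.max_delay)
            else:
                # Still hard: review again in the next epoch.
                self.delay[i] = 1
            self.next_review[i] = epoch + self.delay[i]


if __name__ == "__main__":
    # Usage sketch with a dummy "model" whose per-instance loss halves
    # whenever an instance is reviewed.
    rng = np.random.default_rng(0)
    n = 1000
    losses = rng.uniform(0.0, 1.0, size=n)  # stand-in for per-instance loss
    sched = SpacedRepetitionScheduler(n)

    for epoch in range(10):
        batch = sched.due(epoch)          # only instances that are "due"
        losses[batch] *= 0.5              # pretend training reduces their loss
        sched.update(epoch, batch, losses[batch])
        print(f"epoch {epoch}: trained on {len(batch)}/{n} instances")
```

Under this kind of schedule, the fraction of instances visited per epoch shrinks as more of the data becomes well-learned, which is the mechanism behind the per-epoch data savings the abstract reports.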