Efficient Learning of Model Weights via Changing Features During Training

In this paper, we propose a machine learning model that dynamically changes its features during training. Our main motivation is to update the model incrementally during the training process by replacing less descriptive features with new ones from a large pool. The main benefit comes from the fact that, contrary to common practice, we do not start training a new model from scratch but keep the already learned weights. This procedure allows scanning a large feature pool while keeping the complexity of the model fixed, which increases model accuracy within the same training time. The efficiency of our approach is demonstrated in several classic machine learning scenarios, including linear regression and neural network-based training. As a specific analysis towards signal processing, we have successfully tested our approach on the MNIST database for digit classification, considering single-pixel and pixel-pair intensities as possible features.
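A minimal sketch of the idea as we read it from the abstract (not the authors' code): a linear model is trained on a fixed-size subset of a large feature pool, and periodically the "least descriptive" feature is swapped for an unused candidate while all other learned weights are kept. The smallest-weight-magnitude criterion, the swap schedule, and all names below are illustrative assumptions.

```python
# Hypothetical sketch of feature replacement during training (assumed
# details, not the paper's implementation). A linear regression model
# uses a fixed-size subset of a large feature pool; every few epochs
# the feature with the smallest weight magnitude is replaced by an
# unused candidate, and only that one weight is reinitialized.
import numpy as np

rng = np.random.default_rng(0)

n_samples, pool_size, active_size = 500, 100, 10
X_pool = rng.normal(size=(n_samples, pool_size))        # full feature pool
true_w = np.zeros(pool_size)
true_w[rng.choice(pool_size, 5, replace=False)] = rng.normal(size=5)
y = X_pool @ true_w + 0.1 * rng.normal(size=n_samples)  # noisy targets

active = list(rng.choice(pool_size, active_size, replace=False))
unused = [j for j in range(pool_size) if j not in active]
w = np.zeros(active_size)                               # weights of active features
lr, epochs, swap_every = 0.01, 200, 20

for epoch in range(epochs):
    X = X_pool[:, active]
    grad = X.T @ (X @ w - y) / n_samples                # squared-loss gradient
    w -= lr * grad
    if (epoch + 1) % swap_every == 0 and unused:
        weakest = int(np.argmin(np.abs(w)))             # least descriptive feature
        unused.append(active[weakest])                  # return it to the pool
        active[weakest] = unused.pop(0)                 # bring in a new candidate
        w[weakest] = 0.0                                # only the new slot restarts
```

Note that only the replaced coordinate is reset; the remaining weights carry over across swaps, which is what avoids retraining a new model from scratch while the feature pool is scanned.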
