
An analysis of numerical issues in neural training by pseudoinversion

Several novel strategies have recently been proposed for training single-hidden-layer neural networks in which the input-to-hidden weights are set randomly, while the hidden-to-output weights are determined analytically by pseudoinversion. These techniques are gaining popularity despite their known numerical issues when singular or almost singular matrices are involved. In this paper we discuss a critical use of Singular Value Analysis to identify these drawbacks, and we propose an original use of regularisation, based on the concept of critical hidden layer size, to determine the output weights. This approach also limits the computational effort of training. In addition, we introduce a novel technique that ties an effective determination of the input weights to the hidden layer dimension. The approach is tested on both regression and classification tasks, resulting in a significant performance improvement with respect to alternative methods.
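As a rough illustration of the general scheme the abstract describes (not the authors' code), the sketch below trains a single-hidden-layer network by fixing random input weights, solving for the output weights by Moore-Penrose pseudoinversion, and comparing that with a Tikhonov-regularised solution; the condition number of the hidden activation matrix, obtained from its singular values, signals when plain pseudoinversion becomes numerically fragile. The hidden layer size, the regularisation strength `lam`, and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: X is (n_samples, n_inputs), T is (n_samples, n_outputs).
X = rng.uniform(-1.0, 1.0, size=(200, 3))
T = np.sin(X.sum(axis=1, keepdims=True))

n_hidden = 150  # assumed size; as it nears n_samples, H grows ill-conditioned

# Step 1: random input-to-hidden weights and biases, kept fixed after drawing.
W_in = rng.standard_normal((X.shape[1], n_hidden))
b = rng.standard_normal(n_hidden)

# Step 2: hidden layer activation matrix H.
H = np.tanh(X @ W_in + b)

# Singular value analysis: a very large condition number flags an (almost)
# singular H, where pseudoinversion amplifies noise in the targets.
s = np.linalg.svd(H, compute_uv=False)
print(f"cond(H) = {s[0] / s[-1]:.3e}")

# Step 3a: output weights by plain Moore-Penrose pseudoinversion.
W_out_pinv = np.linalg.pinv(H) @ T

# Step 3b: Tikhonov-regularised output weights, damping small singular values:
#   W_out = (H^T H + lam * I)^{-1} H^T T
lam = 1e-3  # assumed regularisation strength
W_out_ridge = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)

# Compare training error of the two solutions.
for name, W_out in [("pinv", W_out_pinv), ("ridge", W_out_ridge)]:
    mse = np.mean((H @ W_out - T) ** 2)
    print(f"{name}: train MSE = {mse:.3e}")
```

The regularised solve trades a small bias for much better conditioning: adding lam to the squared singular values bounds the amplification of the smallest ones, which is exactly the failure mode the abstract attributes to near-singular matrices.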
