An analysis of numerical issues in neural training by pseudoinversion

25 Aug 2015 · R. Cancelliere, R. Deluca, M. Gai, P. Gallinari, L. Rubini

Several novel strategies have recently been proposed for training single hidden layer neural networks: the weights from the input to the hidden layer are set at random, while the weights from the hidden to the output layer are determined analytically by pseudoinversion. These techniques are gaining popularity in spite of their known numerical issues when singular or almost singular matrices are involved. In this paper we discuss a critical use of Singular Value Analysis to identify these drawbacks, and we propose an original use of regularisation to determine the output weights, based on the concept of critical hidden layer size. This approach also limits the computational effort of training. Besides, we introduce a novel technique which relates an effective determination of the input weights to the hidden layer dimension. The approach is tested on both regression and classification tasks, resulting in a significant performance improvement over alternative methods.
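
To make the setting concrete, here is a minimal sketch (not the authors' code) of the pseudoinversion training scheme the abstract describes: input weights drawn at random, hidden-to-output weights obtained by Moore-Penrose pseudoinversion, with a Tikhonov-regularised solve as the alternative when singular value analysis reveals an ill-conditioned hidden-layer matrix. All names and parameter values (`n_hidden`, `lam`, the `tanh` activation) are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, Y, n_hidden, lam=0.0):
    """Random input weights; output weights by (regularised) pseudoinversion."""
    n_features = X.shape[1]
    W_in = rng.standard_normal((n_features, n_hidden))  # random, left untrained
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W_in + b)                           # hidden-layer output matrix

    # Singular value analysis: a very small smallest singular value signals
    # the near-singularity that makes plain pseudoinversion unreliable.
    s = np.linalg.svd(H, compute_uv=False)
    print(f"condition number of H: {s[0] / s[-1]:.3e}")

    if lam > 0.0:
        # Tikhonov-regularised solution: W_out = (H^T H + lam I)^{-1} H^T Y
        W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)
    else:
        # Plain Moore-Penrose pseudoinverse: W_out = H^+ Y
        W_out = np.linalg.pinv(H) @ Y
    return W_in, b, W_out

def predict(X, W_in, b, W_out):
    return np.tanh(X @ W_in + b) @ W_out

# Toy regression usage
X = rng.standard_normal((200, 5))
Y = np.sin(X.sum(axis=1, keepdims=True))
params = train(X, Y, n_hidden=100, lam=1e-3)
print("train MSE:", np.mean((predict(X, *params) - Y) ** 2))
```

Raising `n_hidden` toward (or past) the number of training samples tends to drive the condition number of `H` up sharply, which is where the regularised branch becomes the safer choice.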
