PILAE: A Non-gradient Descent Learning Scheme for Deep Feedforward Neural Networks

5 Nov 2018  ·  P. Guo, K. Wang, X. L. Zhou ·

In this work, a non-gradient descent learning (NGDL) scheme is proposed for deep feedforward neural networks (DNNs). Since an autoencoder can serve as a building block of a multilayer perceptron (MLP) DNN, the MLP is taken as an example to illustrate the proposed pseudoinverse learning algorithm for autoencoders (PILAE). PILAE with low-rank approximation is an NGDL algorithm: the encoder weight matrix is set to a low-rank approximation of the pseudoinverse of the input matrix, while the decoder weight matrix is computed with the pseudoinverse learning algorithm. It is worth noting that only a few network structure hyper-parameters need to be tuned compared with classical gradient descent learning algorithms. Hence, the proposed algorithm can be regarded as a quasi-automated training algorithm suitable for the automated machine learning field. The experimental results show that the proposed learning scheme for DNNs achieves a better trade-off between training efficiency and classification accuracy.
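To make the layer-wise procedure concrete, below is a minimal sketch, not the authors' reference implementation, of one PILAE layer and greedy stacking. It assumes samples are stored as columns of the input matrix, reads "low-rank approximation of the pseudoinverse" as a rank-p truncation of the SVD-based pseudoinverse, and solves the decoder in closed form by regularized pseudoinverse learning; the function names (`pilae_layer`, `stack_pilae`) and the parameters `p`, `reg` are illustrative choices, not from the paper.

```python
import numpy as np

def pilae_layer(X, p, reg=1e-3, activation=np.tanh):
    """Train one autoencoder layer without gradient descent (sketch).

    X : (d, N) data matrix, one sample per column.
    p : hidden-layer width (rank of the truncated pseudoinverse).
    Returns (W_e, W_d, H): encoder weight, decoder weight, hidden output.
    """
    # SVD of the input: X = U S V^T; its pseudoinverse is V S^{-1} U^T.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    p = min(p, len(s))                              # guard against p > rank
    s_inv = np.where(s > 1e-10, 1.0 / s, 0.0)

    # Encoder weight: rank-p factor of the pseudoinverse (one reading of the
    # "low rank approximation" described in the abstract).
    W_e = s_inv[:p, None] * U[:, :p].T              # shape (p, d)

    # Hidden representation.
    H = activation(W_e @ X)                          # shape (p, N)

    # Decoder weight by pseudoinverse learning: the closed-form solution of
    # min ||W_d H - X||^2 + reg * ||W_d||^2, i.e. W_d = X H^T (H H^T + reg I)^{-1}.
    W_d = X @ H.T @ np.linalg.inv(H @ H.T + reg * np.eye(p))   # shape (d, p)
    return W_e, W_d, H


def stack_pilae(X, widths, reg=1e-3):
    """Greedily stack PILAE layers: each layer's hidden output feeds the next."""
    encoders = []
    H = X
    for p in widths:
        W_e, _, H = pilae_layer(H, p, reg=reg)
        encoders.append(W_e)
    return encoders, H
```

In the same spirit, the weights of a final classifier layer on top of the stacked encoders could also be obtained in closed form by pseudoinverse learning rather than by back-propagation, so the whole network is trained without gradient descent.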
