An Evolutionary Algorithm of Linear Complexity: Application to Training of Deep Neural Networks

12 Jul 2019 · S. Ivvan Valdez, Alfonso Rojas-Domínguez

The performance of deep neural networks, such as Deep Belief Networks formed by Restricted Boltzmann Machines (RBMs), strongly depends on their training, i.e., the process of adjusting their parameters. This process can be posed as an optimization problem over $n$ dimensions. However, typical networks contain tens of thousands of parameters, making this a High-Dimensional Problem (HDP). Although different optimization methods have been employed for this goal, most Evolutionary Algorithms (EAs) become prohibitively expensive because they cannot cope with HDPs. For instance, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), regarded as one of the most effective EAs, suffers the significant disadvantage of requiring $O(n^2)$ memory and operations, making it impractical for problems with more than a few hundred variables. In this paper, we introduce a novel EA that requires only $O(n)$ memory and operations, yet delivers competitive solutions for the training stage of RBMs with over one million variables, when compared against CMA-ES and the Contrastive Divergence algorithm, the standard method for training RBMs.
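
No code implementation is linked from this page, and the abstract does not specify the algorithm's update rules. As a rough intuition for how an evolutionary algorithm can run in $O(n)$ memory and operations, the sketch below restricts the Gaussian search model to a diagonal covariance, in the style of a separable estimation-of-distribution algorithm; every name, hyperparameter, and update rule here is an illustrative assumption, not the authors' method.

```python
import numpy as np

def separable_gaussian_ea(f, n, pop_size=50, elite_frac=0.3,
                          generations=200, seed=0):
    """Minimize f over R^n with a diagonal (separable) Gaussian model.

    The model keeps one mean and one variance per dimension: O(n) memory,
    versus the O(n^2) full covariance matrix maintained by CMA-ES.
    NOTE: illustrative sketch only; this is not the algorithm of the paper.
    """
    rng = np.random.default_rng(seed)
    mean = np.zeros(n)                      # O(n) model parameters
    std = np.ones(n)                        # per-dimension std devs, O(n)
    n_elite = max(2, int(elite_frac * pop_size))

    for _ in range(generations):
        # Sample a population around the current mean: O(pop_size * n).
        pop = mean + std * rng.standard_normal((pop_size, n))
        fitness = np.apply_along_axis(f, 1, pop)
        # Keep the best candidates (minimization).
        elite = pop[np.argsort(fitness)[:n_elite]]
        # Refit the diagonal model from the elite set: O(n_elite * n).
        mean = elite.mean(axis=0)
        std = np.maximum(elite.std(axis=0), 1e-8)  # avoid premature collapse
    return mean, f(mean)

# Hypothetical usage: a 10,000-dimensional quadratic. A full-covariance
# method would need ~10^8 entries for its covariance matrix here.
best_x, best_f = separable_gaussian_ea(lambda x: np.sum((x - 1.0) ** 2),
                                       n=10_000)
```

The diagonal restriction is one common route to the linear bound the abstract claims; it trades away the ability to model correlations between variables, which is exactly what CMA-ES's $O(n^2)$ covariance matrix pays for.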
