Safe Mutations for Deep and Recurrent Neural Networks through Output Gradients

While neuroevolution (evolving neural networks) has a successful track record across a variety of domains from reinforcement learning to artificial life, it is rarely applied to large, deep neural networks. A central reason is that while random mutation generally works in low dimensions, a random perturbation of thousands or millions of weights is likely to break existing functionality, providing no learning signal even if some individual weight changes were beneficial...
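The paper's remedy, as the title indicates, is to scale the mutation of each individual weight by the sensitivity of the network's outputs to that weight, computed from the gradient of the outputs (rather than of an error signal) with respect to the weights. Below is a minimal PyTorch sketch of this safe-mutation-through-gradients idea; the function name, the reference input batch, and the `sigma`/`eps` parameters are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

def safe_mutation_sketch(policy: nn.Module,
                         inputs: torch.Tensor,
                         sigma: float = 0.1,
                         eps: float = 1e-8) -> None:
    """Perturb policy weights, scaling each weight's mutation by
    the sensitivity of the network's outputs to that weight."""
    outputs = policy(inputs)  # shape: (batch, num_outputs)
    params = [p for p in policy.parameters() if p.requires_grad]
    sq_grads = [torch.zeros_like(p) for p in params]

    # Accumulate, per weight, squared gradients of each output dimension
    for k in range(outputs.shape[1]):
        grads = torch.autograd.grad(outputs[:, k].sum(), params,
                                    retain_graph=True)
        for acc, g in zip(sq_grads, grads):
            acc += g ** 2

    with torch.no_grad():
        for p, acc in zip(params, sq_grads):
            sensitivity = acc.sqrt()  # per-weight output sensitivity
            # Weights the outputs are highly sensitive to receive
            # proportionally smaller random perturbations
            p += sigma * torch.randn_like(p) / (sensitivity + eps)

# Example usage on a toy policy and a batch of reference states
policy = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 4))
states = torch.randn(64, 8)
safe_mutation_sketch(policy, states)
```

The key property is that the scaling is derived purely from forward and backward passes over a batch of reference inputs, so it requires no additional fitness evaluations beyond those the evolutionary loop already performs.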
