AdaSwarm: A Novel PSO Optimization Method for the Mathematical Equivalence of Error Gradients

19 May 2020 · Rohan Mohapatra, Snehanshu Saha, Soma S. Dhavala

This paper tackles the age-old question of derivative-free optimization in neural networks. It introduces AdaSwarm, a novel derivative-free optimizer that achieves performance similar to or better than Adam, but without "gradients"...
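For context on the swarm-based approach the abstract refers to, the sketch below shows a generic particle swarm optimization (PSO) step minimizing a loss without computing gradients. This is not the AdaSwarm algorithm itself; the quadratic objective, swarm size, and coefficients are illustrative assumptions only.

```python
import numpy as np

def loss(w):
    # Hypothetical objective: a quadratic bowl standing in for a loss surface.
    return np.sum((w - 3.0) ** 2)

rng = np.random.default_rng(0)
dim, n_particles = 2, 20
pos = rng.uniform(-5, 5, size=(n_particles, dim))  # particle positions
vel = np.zeros_like(pos)                           # particle velocities
pbest = pos.copy()                                 # personal best positions
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]                # global best position

w_inertia, c1, c2 = 0.7, 1.5, 1.5                  # standard PSO coefficients (assumed values)
for _ in range(100):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    vel = (w_inertia * vel
           + c1 * r1 * (pbest - pos)               # pull toward each particle's best
           + c2 * r2 * (gbest - pos))              # pull toward the swarm's best
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best parameters found:", gbest)             # approaches [3, 3] without any gradient
```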


Code


No code implementations yet.

Results from the Paper



Methods used in the Paper