no code implementations • 3 May 2020 • Ruthvik Vaila, John Chiasson, Vishal Saxena
Deep Neural Networks (DNNs) have two key deficiencies: their dependence on high-precision computing and their inability to perform sequential learning, that is, when a DNN is trained on a first task and the same DNN is then trained on a second task, it forgets the first task.
1 code implementation • 26 Feb 2020 • Ruthvik Vaila, John Chiasson, Vishal Saxena
The effect of stochastic gradient descent (SGD) approximations on the learning capabilities of our network is also explored.
no code implementations • 28 Mar 2019 • Ruthvik Vaila, John Chiasson, Vishal Saxena
Spiking neural networks are biologically plausible counterparts of artificial neural networks. Artificial neural networks are usually trained with stochastic gradient descent, whereas spiking neural networks are trained with spike-timing-dependent plasticity (STDP).
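To make the contrast with gradient descent concrete, below is a minimal sketch of a pair-based STDP weight update. This is a generic textbook form, not the specific rule used in the paper; the function name and all parameter values (`a_plus`, `a_minus`, `tau`) are illustrative assumptions.

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP sketch (illustrative, not the paper's exact rule).

    w  : current synaptic weight
    dt : t_post - t_pre in ms; positive means the presynaptic spike
         preceded the postsynaptic spike
    """
    if dt > 0:
        # Pre fires before post: potentiate, decaying with the spike gap.
        dw = a_plus * np.exp(-dt / tau)
    else:
        # Post fires before (or with) pre: depress.
        dw = -a_minus * np.exp(dt / tau)
    # Keep the weight in a bounded range, as is common for STDP models.
    return float(np.clip(w + dw, 0.0, 1.0))
```

Unlike SGD, this update uses only locally available spike-timing information, with no global loss gradient.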