1 code implementation • 15 Jun 2021 • Wilkie Olin-Ammentorp, Maxim Bazhenov
Standard neural networks face several limitations: high energy consumption, catastrophic forgetting, dependence on global losses, and an inability to reason symbolically. In this work, we extend standard neural networks by building upon the assumption that neuronal activations correspond to the angle of a complex number lying on the unit circle, or 'phasor.'
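The phasor idea above can be illustrated with a minimal sketch: a real-valued activation is encoded as the angle of a unit-magnitude complex number. The function names and the mapping of activations in [0, 1) to angles are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical sketch: encode activations as phasors, i.e. complex
# numbers of magnitude 1 whose angle carries the activation value.
# The [0, 1) -> angle mapping below is an assumption for illustration.

def to_phasor(activations):
    """Map activations in [0, 1) to points on the complex unit circle."""
    angles = 2 * np.pi * np.asarray(activations)
    return np.exp(1j * angles)

def from_phasor(phasors):
    """Recover activations from phasor angles (wrapped back into [0, 1))."""
    return np.mod(np.angle(phasors) / (2 * np.pi), 1.0)

acts = np.array([0.0, 0.25, 0.5, 0.9])
z = to_phasor(acts)
assert np.allclose(np.abs(z), 1.0)        # every phasor lies on the unit circle
assert np.allclose(from_phasor(z), acts)  # the angle round-trips the activation
```

Because only the angle carries information, all phasors have identical magnitude, which is what makes the representation compatible with spike-timing interpretations.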
no code implementations • 1 Apr 2021 • Tyler L. Hayes, Giri P. Krishnan, Maxim Bazhenov, Hava T. Siegelmann, Terrence J. Sejnowski, Christopher Kanan
Replay is the reactivation of one or more neural patterns that resemble the activation patterns observed during past waking experience.
no code implementations • 5 Mar 2021 • Wilkie Olin-Ammentorp, Yury Sokolov, Maxim Bazhenov
Reinforcement learning (RL) is a foundation of learning in biological systems and provides a framework to address numerous challenges with real-world artificial intelligence applications.
no code implementations • ICLR 2020 • Timothy Tadros, Giri Krishnan, Ramyaa Ramyaa, Maxim Bazhenov
In this work, we utilize a biologically inspired sleep phase in ANNs and demonstrate that sleep both defends against adversarial attacks and increases ANN classification robustness.
no code implementations • 1 Aug 2019 • Giri P. Krishnan, Timothy Tadros, Ramyaa Ramyaa, Maxim Bazhenov
First, in an incremental learning framework, sleep recovers older tasks that would otherwise be lost to catastrophic forgetting in an ANN without a sleep phase.
1 code implementation • 8 Jun 2017 • Tiger W. Lin, Anup Das, Giri P. Krishnan, Maxim Bazhenov, Terrence J. Sejnowski
In all of our simulated data, the differential-covariance-based methods achieved performance better than or similar to that of the GLM method while requiring fewer data samples.
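The differential-covariance estimate referenced above can be sketched as the covariance between each signal's temporal derivative and the raw signals. This is a hedged illustration of the general idea; the paper's exact estimators and normalizations may differ.

```python
import numpy as np

# Hedged sketch of a differential-covariance estimate: the covariance
# between the temporal derivative of each signal and the raw signals.
# Function name and normalization are illustrative assumptions.

def differential_covariance(x, dt=1.0):
    """x: array of shape (n_samples, n_signals).
    Returns an (n_signals, n_signals) matrix dC[i, j] ~ cov(dx_i/dt, x_j)."""
    dx = np.gradient(x, dt, axis=0)       # finite-difference time derivative
    xc = x - x.mean(axis=0)               # center the signals
    dxc = dx - dx.mean(axis=0)            # center the derivatives
    return dxc.T @ xc / (x.shape[0] - 1)  # sample covariance estimate

rng = np.random.default_rng(0)
x = rng.standard_normal((500, 3))         # toy recording: 500 samples, 3 signals
dC = differential_covariance(x)
assert dC.shape == (3, 3)
```

Unlike an ordinary covariance matrix, this matrix is generally asymmetric, which is what lets it carry directional connectivity information.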