Search Results for author: Maxim Bazhenov

Found 7 papers, 3 papers with code

Deep Phasor Networks: Connecting Conventional and Spiking Neural Networks

1 code implementation • 15 Jun 2021 • Wilkie Olin-Ammentorp, Maxim Bazhenov

In this work, we extend standard neural networks by building on the assumption that neuronal activations correspond to the angle of a complex number lying on the unit circle, or 'phasor.'
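
The phasor assumption lends itself to a compact illustration. The sketch below is a hypothetical NumPy reading of that assumption, not the authors' released implementation: activations are angles, a layer lifts them to unit-circle complex numbers, superposes them through real weights, and reads out the resulting phase.

```python
import numpy as np

def phasor_layer(theta_in, weights):
    """One dense 'phasor' layer: activations are angles on the unit circle.

    theta_in: (n_in,) input activations as angles in radians.
    weights:  (n_out, n_in) real weight matrix.
    Returns the angles of the complex-weighted sums (an illustrative
    reading of the phasor assumption, not the authors' code).
    """
    z = np.exp(1j * theta_in)          # lift angles to unit-circle phasors
    summed = weights @ z               # complex superposition of inputs
    return np.angle(summed)            # read out the resulting phase

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=8)    # toy input phases
W = rng.normal(size=(4, 8))
print(phasor_layer(theta, W))                # 4 output phases in (-pi, pi]
```

Because every activation has unit magnitude, all information is carried by phase alone, which is the property that lets the same weights be interpreted in either a conventional or a spiking regime.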

Bridge Networks: Relating Inputs through Vector-Symbolic Manipulations

1 code implementation • 15 Jun 2021 • Wilkie Olin-Ammentorp, Maxim Bazhenov

These challenges include high energy consumption, catastrophic forgetting, dependence on global losses, and an inability to reason symbolically.

Replay in Deep Learning: Current Approaches and Missing Biological Elements

no code implementations • 1 Apr 2021 • Tyler L. Hayes, Giri P. Krishnan, Maxim Bazhenov, Hava T. Siegelmann, Terrence J. Sejnowski, Christopher Kanan

Replay is the reactivation of one or more neural patterns that resemble the activation patterns evoked during past waking experience.

Retrieval
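
In deep learning, replay as defined above is most often realized as a rehearsal buffer that stores past examples and mixes them into later training batches. The generic buffer below makes that analogue concrete; it is illustrative only, and the `ReplayBuffer` name is invented here rather than taken from the survey.

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal rehearsal buffer: stores past (input, label) pairs and
    mixes them into new batches, the standard deep-learning analogue
    of biological replay. Illustrative only, not code from the survey."""

    def __init__(self, capacity=1000):
        self.buffer = deque(maxlen=capacity)  # oldest items drop out first

    def add(self, example):
        self.buffer.append(example)

    def sample(self, k):
        k = min(k, len(self.buffer))
        return random.sample(list(self.buffer), k)

buf = ReplayBuffer(capacity=100)
for i in range(10):                      # pretend these came from task A
    buf.add((f"x{i}", f"y{i}"))
batch = buf.sample(4)                    # replayed alongside new task-B data
print(batch)
```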

A Dual-Memory Architecture for Reinforcement Learning on Neuromorphic Platforms

no code implementations • 5 Mar 2021 • Wilkie Olin-Ammentorp, Yury Sokolov, Maxim Bazhenov

Reinforcement learning (RL) is a foundation of learning in biological systems and provides a framework to address numerous challenges in real-world artificial intelligence applications.

Decision Making • reinforcement-learning +1

Biologically inspired sleep algorithm for increased generalization and adversarial robustness in deep neural networks

no code implementations • ICLR 2020 • Timothy Tadros, Giri Krishnan, Ramyaa Ramyaa, Maxim Bazhenov

In this work, we utilize a biologically inspired sleep phase in ANNs and demonstrate the benefit of sleep in defending against adversarial attacks as well as in increasing ANN classification robustness.

Adversarial Robustness • General Classification +2
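
The sleep phase is described only at a high level here, so the following NumPy sketch is a loose caricature under assumed mechanics (noise-driven activity plus a local Hebbian update), not the authors' algorithm; the `sleep_phase` name and all constants are invented for illustration.

```python
import numpy as np

def sleep_phase(W, n_steps=100, lr=0.001, rate=0.2, seed=0):
    """Loose caricature of an offline Hebbian 'sleep' pass (assumed
    mechanics, not the authors' algorithm): drive the layer with random
    binary activity and apply a local update that strengthens co-active
    input/output pairs and depresses connections from silent inputs."""
    rng = np.random.default_rng(seed)
    n_out, n_in = W.shape
    for _ in range(n_steps):
        x = (rng.random(n_in) < rate).astype(float)   # spontaneous noisy input
        h = W @ x
        y = (h > np.median(h)).astype(float)          # crude winner thresholding
        W += lr * (np.outer(y, x) - 0.5 * np.outer(y, 1 - x))
    return W

W = np.random.default_rng(1).normal(size=(5, 10))
W_after = sleep_phase(W.copy())          # weights nudged by local Hebbian updates
```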

Biologically inspired sleep algorithm for artificial neural networks

no code implementations • 1 Aug 2019 • Giri P. Krishnan, Timothy Tadros, Ramyaa Ramyaa, Maxim Bazhenov

First, in an incremental learning framework, sleep is able to recover older tasks that were otherwise lost to catastrophic forgetting in an ANN trained without a sleep phase.

Incremental Learning • Transfer Learning
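
The recovery result presupposes catastrophic forgetting in the baseline, which a toy experiment can make concrete. The sketch below trains a two-parameter logistic regression on one synthetic task and then on a second, rotated task, and shows the first task's accuracy degrading; the sleep phase itself is omitted (a caricature of it appears above), and everything here is synthetic illustration rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "tasks": classify which side of a line through the origin a
# 2-D point falls on, with task B's boundary rotated relative to task A's.
def make_task(angle, n=200):
    X = rng.normal(size=(n, 2))
    w = np.array([np.cos(angle), np.sin(angle)])
    return X, (X @ w > 0).astype(float)

def train(w, X, y, lr=0.1, epochs=20):
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w)))        # logistic regression update
        w += lr * X.T @ (y - p) / len(y)
    return w

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0) == y)

Xa, ya = make_task(0.0)
Xb, yb = make_task(1.2)
w = np.zeros(2)
w = train(w, Xa, ya)                          # learn task A
acc_before = accuracy(w, Xa, ya)
w = train(w, Xb, yb)                          # task B overwrites task A
print(acc_before, accuracy(w, Xa, ya))        # task-A accuracy drops
```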

Differential Covariance: A New Class of Methods to Estimate Sparse Connectivity from Neural Recordings

1 code implementation • 8 Jun 2017 • Tiger W. Lin, Anup Das, Giri P. Krishnan, Maxim Bazhenov, Terrence J. Sejnowski

In all of our simulated data, the differential covariance-based methods achieved performance better than or similar to the GLM method while requiring fewer data samples.

Connectivity Estimation
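
Differential covariance, as the name suggests, correlates each signal's temporal derivative with the signals themselves. The sketch below is a minimal NumPy rendering of that idea, not the authors' released code; the centering and normalization choices are assumptions.

```python
import numpy as np

def differential_covariance(X, dt=1.0):
    """Minimal reading of differential covariance (a sketch, not the
    paper's released code): correlate each signal's time derivative
    with every signal. X has shape (T, N): T samples, N channels."""
    dX = np.gradient(X, dt, axis=0)            # finite-difference derivative
    Xc = X - X.mean(axis=0)                    # center both factors
    dXc = dX - dX.mean(axis=0)
    return dXc.T @ Xc / (X.shape[0] - 1)       # dC[i, j] = cov(dx_i/dt, x_j)

T, N = 1000, 5
rng = np.random.default_rng(0)
X = np.cumsum(rng.normal(size=(T, N)), axis=0)   # toy correlated time series
print(differential_covariance(X).shape)          # (5, 5) connectivity estimate
```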
