no code implementations • 17 Nov 2023 • James Hazelden, Yuhan Helena Liu, Eli Shlizerman, Eric Shea-Brown
Training networks consisting of biophysically accurate neuron models could allow for new insights into how brain circuits can organize and solve tasks.
no code implementations • 12 Oct 2023 • Yuhan Helena Liu, Aristide Baratin, Jonathan Cornford, Stefan Mihalas, Eric Shea-Brown, Guillaume Lajoie
Through both empirical and theoretical analyses, we discover that high-rank initializations typically yield smaller network changes, indicative of lazier learning; we confirm this finding with experimentally driven initial connectivity in recurrent neural networks.
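The contrast between initialization rank and "laziness" can be illustrated with a minimal sketch: construct a rank-r recurrent weight matrix and measure learning laziness as relative weight change. The scaling convention and the `laziness` metric below are illustrative assumptions for this sketch, not the paper's exact recipe.

```python
import numpy as np

def low_rank_init(n, r, g=1.0, seed=0):
    """Random n x n recurrent weight matrix of rank r.
    The 1/sqrt(n*r) scaling (an assumption here, not the paper's exact
    convention) keeps entry magnitudes roughly comparable across ranks."""
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((n, r))
    V = rng.standard_normal((n, r))
    return g * (U @ V.T) / np.sqrt(n * r)

def laziness(W0, W_trained):
    """Relative weight change after training; smaller values mean the
    network moved less from its initialization, i.e. lazier learning."""
    return np.linalg.norm(W_trained - W0) / np.linalg.norm(W0)

W_low = low_rank_init(100, r=2)      # low-rank initialization
W_high = low_rank_init(100, r=100)   # full-rank initialization
print(np.linalg.matrix_rank(W_low), np.linalg.matrix_rank(W_high))
```

In practice one would train two RNNs from `W_low` and `W_high` on the same task and compare `laziness(W0, W_trained)` for each; the finding above predicts a smaller value for the high-rank start.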
1 code implementation • 2 Jun 2022 • Yuhan Helena Liu, Stephen Smith, Stefan Mihalas, Eric Shea-Brown, Uygar Sümbül
Finally, we derive an in-silico implementation of ModProp that could serve as a low-complexity and causal alternative to backpropagation through time.
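ModProp itself is derived in the paper; as a generic illustration of what "causal alternative to backpropagation through time" means, the sketch below computes the same gradient two ways for a toy leaky integrator: once with a forward-running eligibility trace (causal, no stored history) and once with BPTT. This is a standard forward-mode construction, not ModProp.

```python
import numpy as np

def forward_trace_grad(x, w, alpha):
    """Causal, online gradient of L = 0.5 * h_T**2 w.r.t. the input weight w
    for a leaky integrator h_t = alpha*h_{t-1} + w*x_t. The eligibility
    trace e_t = dh_t/dw is updated forward in time alongside the dynamics,
    so no backward pass over the stored history is needed."""
    h, e = 0.0, 0.0
    for xt in x:
        e = alpha * e + xt        # trace update runs with the forward pass
        h = alpha * h + w * xt
    return h * e                  # dL/dw = h_T * dh_T/dw

def bptt_grad(x, w, alpha):
    """Same gradient via backpropagation through time (stores the history)."""
    hs = [0.0]
    for xt in x:
        hs.append(alpha * hs[-1] + w * xt)
    g, dh = 0.0, hs[-1]           # dL/dh_T = h_T
    for xt in reversed(x):
        g += dh * xt              # contribution through w at this step
        dh *= alpha               # propagate back through the leak
    return g

x = np.sin(np.arange(10))
print(forward_trace_grad(x, 0.5, 0.9), bptt_grad(x, 0.5, 0.9))
```

For this linear toy model the two gradients agree exactly; the appeal of causal schemes is that the forward version never has to unroll the past.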
1 code implementation • 2 Jun 2022 • Yuhan Helena Liu, Arna Ghosh, Blake A. Richards, Eric Shea-Brown, Guillaume Lajoie
We first demonstrate that state-of-the-art biologically plausible learning rules for training RNNs exhibit worse and more variable generalization performance than their machine learning counterparts that follow the true gradient more closely.
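The notion of "following the true gradient more closely" can be made concrete with a toy sketch: measure the cosine alignment between the true gradient and an approximate update, and check how alignment affects the loss drop from one descent step. The rotated "approximate rule" below is a hypothetical stand-in for the bias a biologically plausible approximation introduces, not any rule from the paper.

```python
import numpy as np

def cosine(a, b):
    """Alignment between two update directions."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy quadratic loss L(w) = 0.5 * ||A w - b||^2 with a known true gradient.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
w = rng.standard_normal(20)

def loss(w):
    return 0.5 * np.linalg.norm(A @ w - b) ** 2

g_true = A.T @ (A @ w - b)

# Hypothetical approximate rule: rotate the true gradient by 60 degrees,
# standing in for the systematic bias of an approximate learning rule.
u = rng.standard_normal(20)
u -= (u @ g_true) * g_true / (g_true @ g_true)    # orthogonalize against g_true
u *= np.linalg.norm(g_true) / np.linalg.norm(u)   # match norms
theta = np.pi / 3
g_approx = np.cos(theta) * g_true + np.sin(theta) * u

eta = 1e-4
drop_true = loss(w) - loss(w - eta * g_true)
drop_approx = loss(w) - loss(w - eta * g_approx)
print(round(cosine(g_true, g_approx), 3))  # 0.5 by construction (cos 60 deg)
print(drop_true > drop_approx)             # worse alignment, smaller loss drop
```

To first order the loss drop scales with the alignment, which is one way to quantify how far an approximate rule strays from the true gradient.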