1 code implementation • NeurIPS 2020 • Isabella Pozzi, Sander Bohte, Pieter Roelfsema
We show how the new learning scheme, Attention-Gated Brain Propagation (BrainProp), is mathematically equivalent to error backpropagation for one output unit at a time.
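The equivalence claimed above can be illustrated with a minimal sketch (not the authors' implementation; network sizes, the squared loss, and the tanh nonlinearity are assumptions): if only one selected output unit propagates its error, the hidden-layer gradient it induces matches the corresponding single-unit term of the full backpropagation gradient.

```python
import numpy as np

# Hypothetical two-layer network for illustration only.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 5, 3
W1 = rng.normal(size=(n_hid, n_in))
W2 = rng.normal(size=(n_out, n_hid))

x = rng.normal(size=n_in)
h = np.tanh(W1 @ x)          # hidden activity
y = W2 @ h                   # linear output scores

target = 1                   # index of the attended/selected output unit
t = np.zeros(n_out)
t[target] = 1.0

# Full backprop output-layer error (squared loss).
delta_full = y - t
# Attention-gated error: only the selected unit carries its error back.
delta_gated = np.zeros(n_out)
delta_gated[target] = delta_full[target]

# Hidden-layer gradients induced by each error vector.
g_full = (W2.T @ delta_full) * (1 - h**2)
g_gated = (W2.T @ delta_gated) * (1 - h**2)

# The gated gradient equals the single-unit term of the full gradient.
g_unit = delta_full[target] * W2[target] * (1 - h**2)
assert np.allclose(g_gated, g_unit)
```

Summed over trials in which each output unit is selected in turn, such single-unit updates recover the full backpropagation gradient, which is the sense of the equivalence stated in the abstract.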
1 code implementation • 5 Nov 2018 • Isabella Pozzi, Sander Bohté, Pieter Roelfsema
Researchers have proposed that deep learning, which has driven important progress on a wide range of highly complex tasks, might inspire new insights into learning in the brain.
no code implementations • NeurIPS 2012 • Jaldert Rombouts, Pieter Roelfsema, Sander M. Bohte
Neurons in association cortex play an important role in this process: during learning, these neurons become tuned to relevant features and represent the information that is required later as a persistent elevation of their activity.