We show that the firing rate dynamics of a recurrent neural circuit with a separate set of output units can sample from an arbitrary probability distribution.
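The abstract does not specify the circuit's sampling mechanism, but one standard way continuous dynamics can sample from an arbitrary target distribution is overdamped Langevin dynamics, where the state drifts up the log-density gradient while being perturbed by noise. The sketch below is purely illustrative (the target, step size, and function names are assumptions, not the paper's model):

```python
import numpy as np

def langevin_sample(grad_log_p, x0, n_steps=10000, dt=0.01, seed=0):
    """Sample via overdamped Langevin dynamics:
    x <- x + grad log p(x) * dt + sqrt(2 dt) * noise.
    The stationary distribution of this process is p(x)."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_steps)
    for t in range(n_steps):
        x = x + grad_log_p(x) * dt + np.sqrt(2 * dt) * rng.standard_normal()
        samples[t] = x
    return samples

# Hypothetical toy target: standard normal, so grad log p(x) = -x.
samples = langevin_sample(lambda x: -x, x0=0.0)
```

After a burn-in period, the empirical mean and standard deviation of `samples` approach those of the target (0 and 1 here); a recurrent circuit whose firing-rate dynamics implement such drift and noise terms would inherit the same stationary distribution.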
We first demonstrate that state-of-the-art biologically plausible learning rules for training RNNs generalize worse, and more variably, than their machine-learning counterparts that follow the true gradient more closely.
Finally, we derive an in-silico implementation of ModProp that could serve as a low-complexity and causal alternative to backpropagation through time.
Several groups have developed metrics that provide a quantitative comparison between representations computed by networks and representations measured in cortex.
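One widely used metric of this kind is linear centered kernel alignment (CKA), which scores the similarity of two representation matrices while being invariant to orthogonal transformations and isotropic scaling. This is a generic sketch of that metric, not the specific measures developed by the groups referenced above:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices
    (samples x features); returns 1 for representations that are
    identical up to rotation and isotropic scaling."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, 'fro') ** 2
    den = np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro')
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))          # e.g. network responses
Q, _ = np.linalg.qr(rng.standard_normal((20, 20)))  # random rotation
score_same = linear_cka(X, X @ Q)           # rotated copy of X
score_diff = linear_cka(X, rng.standard_normal((100, 20)))
```

Here `score_same` is 1 (CKA ignores the rotation), while `score_diff` for unrelated random responses is much lower, which is what makes such metrics useful for comparing network and cortical representations recorded in different coordinate systems.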
What determines the dimensionality of activity in neural circuits?
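A common way to quantify the dimensionality in question is the participation ratio of the eigenvalues of the activity covariance matrix. The following sketch (an illustrative convention, not necessarily the measure used in the work above) shows how it distinguishes isotropic from low-rank activity:

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality of an activity matrix X
    (samples x neurons): PR = (sum lam_i)^2 / sum lam_i^2,
    where lam_i are eigenvalues of the covariance of X."""
    C = np.cov(X, rowvar=False)
    lam = np.clip(np.linalg.eigvalsh(C), 0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
# Isotropic activity in 50 dimensions: PR is close to 50.
X_iso = rng.standard_normal((500, 50))
pr_iso = participation_ratio(X_iso)
# 3 latent variables read out by 50 neurons: PR is at most 3.
X_low = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 50))
pr_low = participation_ratio(X_low)
```

PR equals the number of neurons when variance is spread evenly across all directions and collapses toward the latent dimensionality when activity is confined to a low-dimensional subspace.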
Computational neuroscience aims to fit reliable models of in vivo neural activity and interpret them as abstract computations.
Datasets such as images, text, or movies are embedded in high-dimensional spaces.
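To make the embedding concrete: even a small grayscale image, flattened into a vector, is a single point in a space with hundreds of dimensions. A minimal illustration (the image size is an arbitrary example):

```python
import numpy as np

# A 28x28 grayscale image is one point in R^784.
image = np.zeros((28, 28))
vector = image.reshape(-1)  # flatten pixels into a single vector
```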
We demonstrate the efficacy of a low-rank version on visual cortex data and discuss the possibility of extending it to a whole-brain connectivity matrix at the voxel scale.
We analyze three SDE models that have been proposed as approximations to the Markov chain model: one that describes the states of the ion channels and two that describe the states of the ion channel subunits.
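Subunit-level SDE approximations of this kind typically replace the discrete Markov chain over $N$ independent two-state subunits with a diffusion whose noise variance scales as $1/N$. The sketch below is a generic Euler–Maruyama simulation of such an approximation (the rates, subunit count, and function names are illustrative assumptions, not the three models analyzed here):

```python
import numpy as np

def simulate_subunit_sde(alpha, beta, N, x0, T=5.0, dt=1e-3, seed=0):
    """Euler-Maruyama integration of the diffusion approximation
    to N independent two-state subunits with open fraction x:
    dx = (alpha*(1-x) - beta*x) dt
         + sqrt((alpha*(1-x) + beta*x)/N) dW."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        drift = alpha * (1 - x[i]) - beta * x[i]
        diff = np.sqrt(max(alpha * (1 - x[i]) + beta * x[i], 0.0) / N)
        step = drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
        x[i + 1] = np.clip(x[i] + step, 0.0, 1.0)  # keep fraction in [0, 1]
    return x

traj = simulate_subunit_sde(alpha=1.0, beta=1.0, N=1000, x0=0.5)
```

The trajectory fluctuates around the deterministic fixed point $\alpha/(\alpha+\beta)$, with fluctuations shrinking as the subunit count $N$ grows, which is the regime in which such diffusion approximations to the Markov chain are expected to be accurate.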