Winning the lottery with neural connectivity constraints: faster learning across cognitive tasks with spatially constrained sparse RNNs

Recurrent neural networks (RNNs) are often used to model circuits in the brain and can solve a variety of difficult computational problems requiring memory, error correction, or selection [Hopfield, 1982, Maass et al., 2002, Maass, 2011]. However, fully connected RNNs contrast structurally with their biological counterparts, which are extremely sparse (~0.1%). Motivated by the neocortex, where neural connectivity is constrained by physical distance along cortical sheets and other synaptic wiring costs, we introduce locality masked RNNs (LM-RNNs) that use task-agnostic, predetermined graphs with sparsity as low as 4%. We study LM-RNNs in a multitask learning setting relevant to cognitive systems neuroscience, using a commonly used set of tasks, 20-Cog-tasks [Yang et al., 2019]. We show through a reductio ad absurdum that the 20-Cog-tasks can be solved by a small pool of separated autapses that we can mechanistically analyze and understand. Thus, these tasks fall short of the goal of inducing complex recurrent dynamics and modular structure in RNNs. We next contribute a new cognitive multi-task battery, Mod-Cog, consisting of up to 132 tasks, expanding the number and complexity of tasks in 20-Cog-tasks by 7-fold. Importantly, while autapses can solve the simple 20-Cog-tasks, the expanded task set requires richer neural architectures and continuous attractor dynamics. On these tasks, we show that LM-RNNs with optimal sparsity train faster and are more data-efficient than fully connected networks.
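A minimal sketch of the core idea of a locality mask, assuming neurons are placed on a 2D sheet and only units within a fixed radius may connect; the recurrent weight matrix is elementwise-multiplied by this fixed, untrained binary mask. The class name `LocalityMaskedRNN`, the grid size, the radius, and the masking-by-multiplication details below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


def locality_mask(side: int, radius: float) -> torch.Tensor:
    """Binary (side*side, side*side) mask: 1 where two units on a
    side x side grid lie within `radius` of each other."""
    ys, xs = torch.meshgrid(torch.arange(side), torch.arange(side), indexing="ij")
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=1).float()
    dists = torch.cdist(coords, coords)  # pairwise Euclidean distances on the sheet
    return (dists <= radius).float()


class LocalityMaskedRNN(nn.Module):
    """Vanilla RNN whose recurrent weights are gated by a fixed,
    task-agnostic locality graph (the mask itself is never trained)."""

    def __init__(self, n_in: int, side: int, n_out: int, radius: float = 2.0):
        super().__init__()
        n_hidden = side * side
        self.w_in = nn.Linear(n_in, n_hidden)
        self.w_rec = nn.Parameter(torch.randn(n_hidden, n_hidden) / n_hidden**0.5)
        self.w_out = nn.Linear(n_hidden, n_out)
        self.register_buffer("mask", locality_mask(side, radius))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (time, batch, n_in)
        h = torch.zeros(x.shape[1], self.w_rec.shape[0], device=x.device)
        outs = []
        for t in range(x.shape[0]):
            h = torch.tanh(self.w_in(x[t]) + h @ (self.w_rec * self.mask).T)
            outs.append(self.w_out(h))
        return torch.stack(outs)


if __name__ == "__main__":
    rnn = LocalityMaskedRNN(n_in=10, side=16, n_out=5, radius=2.0)
    print("fraction of allowed recurrent connections:", rnn.mask.mean().item())
    y = rnn(torch.randn(50, 8, 10))  # (time, batch, n_out)
    print(y.shape)
```

With a 16x16 sheet and radius 2, each unit connects only to roughly a dozen neighbors out of 256, giving a sparsity in the few-percent range comparable to the 4% figure quoted in the abstract; the key design choice is that the sparse graph is fixed before training rather than learned per task.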
