Task-Driven Learning of Contour Integration Responses in a V1 Model

Under difficult viewing conditions, the brain's visual system uses a variety of modulatory mechanisms to augment its core feed-forward signals. Incorporating these into artificial neural networks can potentially improve their robustness. However, before such mechanisms can be recommended, they need to be fully understood. Here, we present a biologically plausible model of one such mechanism, contour integration, embedded in a task-driven artificial neural network. The model is neuroanatomically realistic, and all of its connections can be mapped onto existing connections in V1 cortex. We find that the model learns to integrate contours from high-level tasks, including tasks involving natural images. Trained models exhibited several neurophysiological and behavioral properties observed experimentally. In contrast, a parameter-matched feed-forward control achieved comparable task-level performance but was largely inconsistent with the neurophysiological data.
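
To make the architecture described above concrete, the sketch below shows one plausible way to embed a recurrent lateral-connection ("contour integration") block on top of a convolutional V1-like front end, trained end to end on a downstream task. This is not the authors' implementation: the use of PyTorch, the layer sizes, the kernel widths, the iteration count, and the excitatory/inhibitory split are all illustrative assumptions, intended only to show how a modulatory block can wrap a feed-forward stage while keeping the feed-forward drive intact.

```python
# Minimal sketch (assumptions, not the paper's code): a feed-forward "V1" conv
# layer whose responses are modulated by iterative lateral interactions, then
# read out by a task head.
import torch
import torch.nn as nn


class ContourIntegrationBlock(nn.Module):
    """Recurrent lateral interactions applied to feed-forward V1 responses.

    Each iteration adds excitatory and subtracts inhibitory lateral input,
    loosely analogous to horizontal connections within V1.
    """

    def __init__(self, channels: int, lateral_kernel: int = 15, n_iters: int = 5):
        super().__init__()
        padding = lateral_kernel // 2
        self.lateral_e = nn.Conv2d(channels, channels, lateral_kernel,
                                   padding=padding, bias=False)
        self.lateral_i = nn.Conv2d(channels, channels, lateral_kernel,
                                   padding=padding, bias=False)
        self.n_iters = n_iters
        self.relu = nn.ReLU()

    def forward(self, ff: torch.Tensor) -> torch.Tensor:
        x = ff
        for _ in range(self.n_iters):
            excitation = self.relu(self.lateral_e(x))
            inhibition = self.relu(self.lateral_i(x))
            # Feed-forward drive is modulated, not replaced, by lateral input.
            x = self.relu(ff + excitation - inhibition)
        return x


class V1ContourModel(nn.Module):
    """Feed-forward "V1" layer + contour integration block + task head."""

    def __init__(self, n_classes: int = 10, channels: int = 64):
        super().__init__()
        self.v1 = nn.Conv2d(3, channels, kernel_size=11, stride=4, padding=5)
        self.contour = ContourIntegrationBlock(channels)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(channels, n_classes))

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        ff = torch.relu(self.v1(images))
        return self.head(self.contour(ff))


if __name__ == "__main__":
    model = V1ContourModel()
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 10])
```

A parameter-matched feed-forward control of the kind mentioned in the abstract could be built by replacing the recurrent block with a single (non-iterated) convolution holding the same number of parameters, keeping the rest of the pipeline unchanged.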
