Aligning Artificial Neural Networks to the Brain yields Shallow Recurrent Architectures

ICLR 2019

Jonas Kubilius, Martin Schrimpf, Ha Hong, Najib J. Majaj, Rishi Rajalingham, Elias B. Issa, Kohitij Kar, Pouya Bashivan, Jonathan Prescott-Roy, Kailyn Schmidt, Aran Nayebi, Daniel Bear, Daniel L. K. Yamins, James J. DiCarlo

Deep artificial neural networks with spatially repeated processing (a.k.a. deep convolutional ANNs) have been established as the best class of candidate models of visual processing in the primate ventral visual processing stream. Over the past five years, these ANNs have evolved from a simple feedforward eight-layer architecture in AlexNet to extremely deep and branching NASNet architectures, demonstrating increasingly better object categorization performance...
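The "spatially repeated processing" that defines a convolutional ANN can be illustrated by a single kernel reused at every image location. A minimal NumPy sketch (function name and shapes are illustrative, not from the paper):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide one shared kernel over every spatial position ('valid' mode).

    The same `kernel` weights are reused at each location -- the
    spatially repeated processing that defines a convolutional layer.
    """
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.empty((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

# A 3x3 averaging kernel applied to a 5x5 image of ones yields all ones.
image = np.ones((5, 5))
kernel = np.full((3, 3), 1.0 / 9.0)
result = conv2d_valid(image, kernel)
```

Because the kernel is shared across positions, depth in such networks comes from stacking layers; the paper's thesis is that recurrence (reapplying a block over time) can substitute for much of that stacking.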




