Search Results for author: Ben Sorscher

Found 4 papers, 3 papers with code

Beyond neural scaling laws: beating power law scaling via data pruning

3 code implementations • 29 Jun 2022 • Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, Ari S. Morcos

Widely observed neural scaling laws, in which error falls off as a power of the training set size, model size, or both, have driven substantial performance improvements in deep learning.

Task: Benchmarking
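The power-law claim above can be made concrete with a small illustration. The sketch below (synthetic numbers, not from the paper) fits the exponent of E(N) = a * N^(-alpha) to hypothetical error measurements by linear regression in log-log space:

```python
import numpy as np

# Hypothetical test errors at increasing training-set sizes; in a
# power-law regime, error falls off as E(N) = a * N**(-alpha).
N = np.array([1e3, 1e4, 1e5, 1e6])
E = np.array([0.42, 0.21, 0.105, 0.053])  # synthetic values, alpha ~ 0.3

# log E = log a - alpha * log N, so a straight-line fit in log-log
# space recovers the scaling exponent.
slope, log_a = np.polyfit(np.log(N), np.log(E), 1)
print(f"alpha ~ {-slope:.2f}, prefactor a ~ {np.exp(log_a):.2f}")
```

The paper's central result is that a suitable data-pruning strategy can beat this power law; the fit above only illustrates the baseline scaling being beaten.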

A theory of learning with constrained weight-distribution

no code implementations • 14 Jun 2022 • Weishun Zhong, Ben Sorscher, Daniel D Lee, Haim Sompolinsky

Our theory predicts that the reduction in capacity due to the constrained weight-distribution is related to the Wasserstein distance between the imposed distribution and that of the standard normal distribution.
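To make the quantity in this prediction concrete, here is a minimal sketch (not the paper's code) estimating a 1-D Wasserstein distance between samples from an imposed weight distribution and the standard normal. The Laplace choice below is purely illustrative, and the paper's exact Wasserstein variant may differ:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
n = 100_000

# Samples from a hypothetical imposed weight distribution (here: a
# unit-variance Laplace) and from the standard normal reference.
imposed = rng.laplace(loc=0.0, scale=1.0 / np.sqrt(2), size=n)
gaussian = rng.standard_normal(n)

# 1-D Wasserstein (earth mover's) distance between the empirical samples.
d = wasserstein_distance(imposed, gaussian)
print(f"W1(imposed, N(0,1)) ~ {d:.4f}")
```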

Explaining heterogeneity in medial entorhinal cortex with task-driven neural networks

1 code implementation • NeurIPS 2021 • Aran Nayebi, Alexander Attinger, Malcolm Campbell, Kiah Hardcastle, Isabel Low, Caitlin Mallory, Gabriel Mel, Ben Sorscher, Alex Williams, Surya Ganguli, Lisa Giocomo, Dan Yamins

Medial entorhinal cortex (MEC) supports a wide range of navigational and memory-related behaviors. Well-known experimental results have revealed specialized cell types in MEC (e.g. grid, border, and head-direction cells) whose highly stereotypical response profiles are suggestive of the roles they might play in supporting MEC functionality.

A unified theory for the origin of grid cells through the lens of pattern formation

1 code implementation • NeurIPS 2019 • Ben Sorscher, Gabriel Mel, Surya Ganguli, Samuel Ocko

This theory provides insight into the optimal solutions of diverse formulations of the normative task, and shows that symmetries in the representation of space correctly predict the structure of learned firing fields in trained neural networks.
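As a toy illustration of the pattern-formation lens (hypothetical parameters, not the paper's code), the sketch below shows how a 1-D difference-of-Gaussians interaction singles out one spatial frequency, the peak of its Fourier transform, which sets the period of the resulting firing pattern:

```python
import numpy as np

# Center-surround (difference-of-Gaussians) interaction kernel with
# hypothetical excitatory/inhibitory widths; each Gaussian is scaled so
# the kernel integrates to roughly zero (negligible DC component).
x = np.linspace(-50, 50, 4096)
sigma_e, sigma_i = 1.0, 2.0
kernel = (np.exp(-x**2 / (2 * sigma_e**2)) / sigma_e
          - np.exp(-x**2 / (2 * sigma_i**2)) / sigma_i)

# In linear pattern-formation theory, the most-amplified spatial
# frequency is where the kernel's Fourier spectrum peaks.
spectrum = np.abs(np.fft.rfft(kernel))
freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])
k_star = freqs[np.argmax(spectrum)]
print(f"dominant frequency ~ {k_star:.3f} cycles/unit, "
      f"period ~ {1 / k_star:.1f} units")
```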
