no code implementations • 27 Nov 2022 • Albert J. Wakhloo, Tamara J. Sussman, SueYeon Chung
Understanding how the statistical and geometric properties of neural activations relate to network performance is a key problem in theoretical neuroscience and deep learning.
1 code implementation • 5 Feb 2022 • Samuel Lippl, L. F. Abbott, SueYeon Chung
Understanding the asymptotic behavior of gradient-descent training of deep neural networks is essential for revealing inductive biases and improving network performance.
1 code implementation • NeurIPS 2021 • Joel Dapello, Jenelle Feather, Hang Le, Tiago Marques, David D. Cox, Josh H. McDermott, James J. DiCarlo, SueYeon Chung
Adversarial examples are often cited by neuroscientists and machine learning researchers as an example of how computational models diverge from biological sensory systems.
no code implementations • ICLR 2022 • Michelle Miller, SueYeon Chung, Kenneth D. Miller
In conclusion, divisive normalization enhances image recognition performance, most strongly when combined with canonical normalization. In doing so, it reduces manifold capacity and sparsity in early layers while increasing them in final layers, and it increases low- or mid-wavelength power in the first-layer receptive fields.
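The paper's specific normalization pools and constants are not reproduced here; as a minimal sketch of the divisive-normalization operation itself (the pooling choice, `sigma`, and exponent `n` below are assumptions for illustration):

```python
import numpy as np

def divisive_normalization(x, sigma=1.0, n=2.0):
    """Canonical divisive normalization: each unit's rectified response is
    divided by pooled activity across the channel dimension. `sigma` and `n`
    are illustrative constants, not values taken from the paper."""
    x = np.abs(np.asarray(x, dtype=float)) ** n
    pool = sigma ** n + x.sum(axis=-1, keepdims=True)
    return x / pool

# The same unit response is suppressed more when its neighbours are active.
print(divisive_normalization([1.0, 0.1, 0.1])[0])  # ~0.50
print(divisive_normalization([1.0, 2.0, 2.0])[0])  # 0.10
```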
no code implementations • 26 Aug 2021 • Landan Seguin, Anthony Ndirango, Neeli Mishra, SueYeon Chung, Tyler Lee
Motivated by a recent study on learning robustness without input perturbations by distilling an AT model, we explore what is learned during adversarial training by analyzing the distribution of logits in AT models.
1 code implementation • NeurIPS 2021 • David G. Clark, L. F. Abbott, SueYeon Chung
We prove that these weight updates are matched in sign to the gradient, enabling accurate credit assignment.
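One simple way to probe such a claim empirically is to measure, layer by layer, the fraction of parameters whose proposed update agrees in sign with the true gradient. The helper below is a generic check of this kind, not the paper's learning rule:

```python
import numpy as np

def sign_agreement(update, gradient, eps=1e-12):
    """Fraction of entries where `update` and `gradient` share the same sign,
    ignoring near-zero entries. Values near 1.0 indicate sign-matched updates
    (following the abstract's phrasing; flip one sign if `update` is defined
    as a descent step rather than a gradient estimate)."""
    u, g = np.ravel(update), np.ravel(gradient)
    mask = (np.abs(u) > eps) & (np.abs(g) > eps)
    return float(np.mean(np.sign(u[mask]) == np.sign(g[mask])))
```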
no code implementations • 1 Jun 2021 • SueYeon Chung
In this thesis, we generalize Gardner's analysis and establish a theory of linear classification of manifolds synthesizing statistical and geometric properties of high dimensional signals.
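For context, the point-classification result that this analysis generalizes is Gardner's classical capacity, recovered in the manifold theory as the zero-size limit; a standard statement (notation assumed, not copied from the thesis):

```latex
% Gardner capacity of a linear classifier: the maximal load alpha = P/N at
% which P random points in N dimensions are separable with margin kappa.
\[
  \alpha_c(\kappa)
  = \left[ \int_{-\kappa}^{\infty} \frac{e^{-t^{2}/2}}{\sqrt{2\pi}}\,(t+\kappa)^{2}\, dt \right]^{-1},
  \qquad \alpha_c(0) = 2 .
\]
```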
no code implementations • ICLR 2021 • Cory Stephenson, Suchismita Padhy, Abhinav Ganesh, Yue Hui, Hanlin Tang, SueYeon Chung
Understanding how large neural networks avoid memorizing training data is key to explaining their high generalization performance.
no code implementations • ACL (RepL4NLP) 2021 • Matteo Alleman, Jonathan Mamou, Miguel A Del Rio, Hanlin Tang, Yoon Kim, SueYeon Chung
While vector-based language representations from pretrained language models have set a new standard for many NLP tasks, there is not yet a complete accounting of their inner workings.
no code implementations • 14 Apr 2021 • SueYeon Chung, L. F. Abbott
One approach to addressing this challenge is to utilize mathematical and computational tools to analyze the geometry of these high-dimensional representations, i.e., neural population geometry.
no code implementations • 1 Jan 2021 • Matteo Alleman, Jonathan Mamou, Miguel A Del Rio, Hanlin Tang, Yoon Kim, SueYeon Chung
Importing from computational and cognitive neuroscience the notion of representational invariance, we perform a series of probes designed to test the sensitivity of Transformer representations to several kinds of structure in sentences.
1 code implementation • NeurIPS 2020 • Josue Nassar, Piotr Aleksander Sokol, SueYeon Chung, Kenneth D. Harris, Il Memming Park
In this work, we investigate the latter by juxtaposing experimental results regarding the covariance spectrum of neural representations in mouse V1 (Stringer et al.) with those of artificial neural networks.
1 code implementation • ICML 2020 • Jonathan Mamou, Hang Le, Miguel Del Rio, Cory Stephenson, Hanlin Tang, Yoon Kim, SueYeon Chung
In addition, we find that the emergence of linear separability in these manifolds is driven by a combined reduction of manifolds' radius, dimensionality and inter-manifold correlations.
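The paper estimates capacity, radius, dimension, and correlations with a mean-field analysis; as a rough, non-equivalent proxy for tracking the same trends, each class manifold can be summarized by a centroid-relative radius and a participation-ratio dimension:

```python
import numpy as np

def manifold_summary(points):
    """Crude geometry summary of one class manifold (rows = feature vectors):
    radius = RMS distance from the class centroid,
    dim    = participation ratio of the centered covariance spectrum.
    These are illustrative proxies, not the paper's mean-field quantities."""
    X = np.asarray(points, dtype=float)
    centered = X - X.mean(axis=0)
    radius = np.sqrt((centered ** 2).sum(axis=1).mean())
    eig = np.clip(np.linalg.eigvalsh(np.cov(centered, rowvar=False)), 0.0, None)
    dim = eig.sum() ** 2 / (eig ** 2).sum()
    return radius, dim

rng = np.random.default_rng(0)
diffuse = rng.normal(size=(200, 64))                                        # larger radius, higher dimension
compact = rng.normal(size=(200, 64)) * np.r_[np.ones(4), 0.05 * np.ones(60)]  # smaller radius, lower dimension
print(manifold_summary(diffuse))
print(manifold_summary(compact))
```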
1 code implementation • NeurIPS 2019 • Cory Stephenson, Jenelle Feather, Suchismita Padhy, Oguz Elibol, Hanlin Tang, Josh McDermott, SueYeon Chung
Higher level concepts such as parts-of-speech and context dependence also emerge in the later layers of the network.
no code implementations • 28 May 2019 • Suchismita Padhy, Jenelle Feather, Cory Stephenson, Oguz Elibol, Hanlin Tang, Josh McDermott, SueYeon Chung
The success of deep neural networks in visual tasks has motivated recent theoretical and empirical work to understand how these networks operate.
no code implementations • 17 Oct 2017 • SueYeon Chung, Daniel D. Lee, Haim Sompolinsky
The effects of label sparsity on the classification capacity of manifolds are elucidated, revealing a scaling relation between label sparsity and manifold radius.
no code implementations • 28 May 2017 • SueYeon Chung, Uri Cohen, Haim Sompolinsky, Daniel D. Lee
We consider the problem of classifying data manifolds where each manifold represents invariances that are parameterized by continuous degrees of freedom.
no code implementations • 6 Dec 2015 • SueYeon Chung, Daniel D. Lee, Haim Sompolinsky
Objects are represented in sensory systems by continuous manifolds due to sensitivity of neuronal responses to changes in physical features such as location, orientation, and intensity.
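As a toy illustration of this idea (all tuning parameters below are arbitrary assumptions): sweeping a single stimulus variable, such as orientation, through a population of tuned units traces out a continuous response manifold for that object.

```python
import numpy as np

# Toy population of orientation-tuned units. Sweeping the stimulus orientation
# traces out a continuous 1-D manifold in the N-dimensional response space.
rng = np.random.default_rng(0)
n_units = 50
preferred = rng.uniform(0, np.pi, n_units)   # preferred orientations (assumed)
kappa = 4.0                                   # tuning sharpness (assumed)

def population_response(theta):
    """Von Mises-like orientation tuning for every unit in the population."""
    return np.exp(kappa * (np.cos(2 * (theta - preferred)) - 1.0))

orientations = np.linspace(0, np.pi, 100)
manifold = np.stack([population_response(t) for t in orientations])  # shape (100, 50)
print(manifold.shape)  # each row is one point on the object's response manifold
```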