Search Results for author: SueYeon Chung

Found 18 papers, 6 papers with code

Linear Classification of Neural Manifolds with Correlated Variability

no code implementations 27 Nov 2022 Albert J. Wakhloo, Tamara J. Sussman, SueYeon Chung

Understanding how the statistical and geometric properties of neural activations relate to network performance is a key problem in theoretical neuroscience and deep learning.

Classification

The Implicit Bias of Gradient Descent on Generalized Gated Linear Networks

1 code implementation 5 Feb 2022 Samuel Lippl, L. F. Abbott, SueYeon Chung

Understanding the asymptotic behavior of gradient-descent training of deep neural networks is essential for revealing inductive biases and improving network performance.

Inductive Bias

Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception

1 code implementation NeurIPS 2021 Joel Dapello, Jenelle Feather, Hang Le, Tiago Marques, David D. Cox, Josh H. McDermott, James J. DiCarlo, SueYeon Chung

Adversarial examples are often cited by neuroscientists and machine learning researchers as an example of how computational models diverge from biological sensory systems.

Adversarial Robustness

Divisive Feature Normalization Improves Image Recognition Performance in AlexNet

no code implementations ICLR 2022 Michelle Miller, SueYeon Chung, Kenneth D. Miller

In conclusion, divisive normalization enhances image recognition performance, most strongly when combined with canonical normalization. In doing so, it reduces manifold capacity and sparsity in early layers while increasing them in final layers, and it increases low- or mid-wavelength power in the first-layer receptive fields.
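As context for the result above, here is a minimal sketch of canonical divisive normalization, in which each unit's response is divided by pooled activity across the layer. The function name and the `sigma` and `n` parameters are illustrative, not taken from the paper's implementation:

```python
import numpy as np

def divisive_normalize(x, sigma=1.0, n=2.0):
    """Divide each unit's driven response by the summed (pooled)
    activity of the layer; sigma sets the semi-saturation constant."""
    pooled = np.abs(x) ** n
    return np.sign(x) * pooled / (sigma ** n + pooled.sum(axis=-1, keepdims=True))

acts = np.array([[1.0, 2.0, 3.0]])
out = divisive_normalize(acts)  # each response shrinks as the pool grows
```

Because the denominator grows with total layer activity, strong co-active units suppress one another, which is the mechanism thought to reshape sparsity across layers.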

Understanding the Logit Distributions of Adversarially-Trained Deep Neural Networks

no code implementations 26 Aug 2021 Landan Seguin, Anthony Ndirango, Neeli Mishra, SueYeon Chung, Tyler Lee

Motivated by a recent study on learning robustness without input perturbations by distilling an AT model, we explore what is learned during adversarial training by analyzing the distribution of logits in AT models.

Adversarial Robustness

Credit Assignment Through Broadcasting a Global Error Vector

1 code implementation NeurIPS 2021 David G. Clark, L. F. Abbott, SueYeon Chung

We prove that these weight updates are matched in sign to the gradient, enabling accurate credit assignment.
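The broadcast idea can be illustrated with a feedback-alignment-style sketch: the same output error vector is sent to every layer through a fixed random matrix rather than being backpropagated layer by layer. This is a toy variant for illustration only, not the paper's exact GEVB learning rule or its sign-matching proof:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer linear network; all shapes are illustrative.
x = rng.normal(size=5)
W1 = rng.normal(size=(4, 5))
W2 = rng.normal(size=(3, 4))
B = rng.normal(size=(4, 3))   # fixed random feedback, replacing W2.T
target = rng.normal(size=3)

h = W1 @ x
y = W2 @ h
g = y - target                # global error vector at the output

# Both layers receive the broadcast error paired with local activity;
# no layer-by-layer backpropagation of errors is needed.
dW2 = np.outer(g, h)
dW1 = np.outer(B @ g, x)

lr = 0.01
W2 -= lr * dW2
W1 -= lr * dW1
```

The appeal of such rules is biological plausibility: each synapse needs only a locally available pre-synaptic activity and a globally broadcast error signal.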

Statistical Mechanics of Neural Processing of Object Manifolds

no code implementations 1 Jun 2021 SueYeon Chung

In this thesis, we generalize Gardner's analysis and establish a theory of linear classification of manifolds synthesizing statistical and geometric properties of high dimensional signals.

Object Recognition
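For reference, the classical Gardner result that this thesis generalizes gives the classification capacity $\alpha_c = P/N$ of a perceptron separating $P$ random points in $N$ dimensions at margin $\kappa$:

$$
\alpha_c(\kappa)^{-1} = \int_{-\infty}^{\kappa} Dt\,(\kappa - t)^2, \qquad Dt = \frac{e^{-t^2/2}}{\sqrt{2\pi}}\,dt,
$$

which recovers the well-known $\alpha_c(0) = 2$. The manifold theory replaces isolated points with extended manifolds, whose effective radius and dimension reduce this capacity.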

On the geometry of generalization and memorization in deep neural networks

no code implementations ICLR 2021 Cory Stephenson, Suchismita Padhy, Abhinav Ganesh, Yue Hui, Hanlin Tang, SueYeon Chung

Understanding how large neural networks avoid memorizing training data is key to explaining their high generalization performance.

Memorization

Syntactic Perturbations Reveal Representational Correlates of Hierarchical Phrase Structure in Pretrained Language Models

no code implementations ACL (RepL4NLP) 2021 Matteo Alleman, Jonathan Mamou, Miguel A Del Rio, Hanlin Tang, Yoon Kim, SueYeon Chung

While vector-based language representations from pretrained language models have set a new standard for many NLP tasks, there is not yet a complete accounting of their inner workings.

Pretrained Language Models

Neural population geometry: An approach for understanding biological and artificial neural networks

no code implementations 14 Apr 2021 SueYeon Chung, L. F. Abbott

One approach to addressing this challenge is to utilize mathematical and computational tools to analyze the geometry of these high-dimensional representations, i.e., neural population geometry.

BIG-bench Machine Learning, Disentanglement

Representational correlates of hierarchical phrase structure in deep language models

no code implementations1 Jan 2021 Matteo Alleman, Jonathan Mamou, Miguel A Del Rio, Hanlin Tang, Yoon Kim, SueYeon Chung

Importing from computational and cognitive neuroscience the notion of representational invariance, we perform a series of probes designed to test the sensitivity of Transformer representations to several kinds of structure in sentences.

On 1/n neural representation and robustness

1 code implementation NeurIPS 2020 Josue Nassar, Piotr Aleksander Sokol, SueYeon Chung, Kenneth D. Harris, Il Memming Park

In this work, we investigate the latter by juxtaposing experimental results regarding the covariance spectrum of neural representations in the mouse V1 (Stringer et al.) with artificial neural networks.

Adversarial Robustness

Emergence of Separable Manifolds in Deep Language Representations

1 code implementation ICML 2020 Jonathan Mamou, Hang Le, Miguel Del Rio, Cory Stephenson, Hanlin Tang, Yoon Kim, SueYeon Chung

In addition, we find that the emergence of linear separability in these manifolds is driven by a combined reduction of manifolds' radius, dimensionality and inter-manifold correlations.

Probing emergent geometry in speech models via replica theory

no code implementations 28 May 2019 Suchismita Padhy, Jenelle Feather, Cory Stephenson, Oguz Elibol, Hanlin Tang, Josh McDermott, SueYeon Chung

The success of deep neural networks in visual tasks has motivated recent theoretical and empirical work to understand how these networks operate.

Speech Recognition

Classification and Geometry of General Perceptual Manifolds

no code implementations 17 Oct 2017 SueYeon Chung, Daniel D. Lee, Haim Sompolinsky

The effects of label sparsity on the classification capacity of manifolds are elucidated, revealing a scaling relation between label sparsity and manifold radius.

Classification, General Classification, +1

Learning Data Manifolds with a Cutting Plane Method

no code implementations 28 May 2017 SueYeon Chung, Uri Cohen, Haim Sompolinsky, Daniel D. Lee

We consider the problem of classifying data manifolds where each manifold represents invariances that are parameterized by continuous degrees of freedom.

Data Augmentation

Linear Readout of Object Manifolds

no code implementations 6 Dec 2015 SueYeon Chung, Daniel D. Lee, Haim Sompolinsky

Objects are represented in sensory systems by continuous manifolds due to sensitivity of neuronal responses to changes in physical features such as location, orientation, and intensity.
