Search Results for author: SueYeon Chung

Found 25 papers, 9 papers with code

Neural population geometry and optimal coding of tasks with shared latent structure

no code implementations 26 Feb 2024 Albert J. Wakhloo, Will Slatton, SueYeon Chung

Humans and animals can recognize latent structures in their environment and apply this information to efficiently navigate the world.

Multi-Task Learning, Navigate

Probing Biological and Artificial Neural Networks with Task-dependent Neural Manifolds

no code implementations 21 Dec 2023 Michael Kuoch, Chi-Ning Chou, Nikhil Parthasarathy, Joel Dapello, James J. DiCarlo, Haim Sompolinsky, SueYeon Chung

Recently, growth in our understanding of the computations performed in both biological and artificial neural networks has largely been driven by either low-level mechanistic studies or global normative approaches.

A Spectral Theory of Neural Prediction and Alignment

1 code implementation NeurIPS 2023 Abdulkadir Canatar, Jenelle Feather, Albert Wakhloo, SueYeon Chung

The representations of neural networks are often compared to those of biological systems by regressing the neural network responses against responses measured from biological systems.

regression
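The regression-based comparison described above can be sketched in a few lines: fit a ridge regression from model activations to recorded responses and score the fit per neuron. All data, shapes, and the regularization strength below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a model layer's activations and recorded neural
# responses to the same 200 stimuli (shapes are made up for illustration).
n_stimuli, n_features, n_neurons = 200, 50, 30
X = rng.standard_normal((n_stimuli, n_features))              # model activations
W_true = rng.standard_normal((n_features, n_neurons))
Y = X @ W_true + 0.1 * rng.standard_normal((n_stimuli, n_neurons))  # "neural" data

# Ridge regression from model features to neural responses.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)
Y_hat = X @ W

# Per-neuron R^2 as a simple alignment score.
ss_res = ((Y - Y_hat) ** 2).sum(axis=0)
ss_tot = ((Y - Y.mean(axis=0)) ** 2).sum(axis=0)
r2 = 1.0 - ss_res / ss_tot
print(f"mean R^2: {r2.mean():.3f}")
```

The paper's spectral theory goes further, relating such regression scores to the eigenspectra of the representations; this sketch only shows the baseline fitting step.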

Neuroscience needs Network Science

no code implementations 10 May 2023 Dániel L Barabási, Ginestra Bianconi, Ed Bullmore, Mark Burgess, SueYeon Chung, Tina Eliassi-Rad, Dileep George, István A. Kovács, Hernán Makse, Christos Papadimitriou, Thomas E. Nichols, Olaf Sporns, Kim Stachenfeld, Zoltán Toroczkai, Emma K. Towlson, Anthony M Zador, Hongkui Zeng, Albert-László Barabási, Amy Bernard, György Buzsáki

We explore the challenges and opportunities in integrating multiple data streams for understanding the neural transitions from development to healthy function to disease, and discuss the potential for collaboration between network science and neuroscience communities.

Learning Efficient Coding of Natural Images with Maximum Manifold Capacity Representations

1 code implementation NeurIPS 2023 Thomas Yerxa, Yilun Kuang, Eero Simoncelli, SueYeon Chung

The resulting method is closely related to, and inspired by, advances in the field of self-supervised learning (SSL), and we demonstrate that MMCRs are competitive with state-of-the-art results on standard SSL benchmarks.

Contrastive Learning, Object Recognition +1

Linear Classification of Neural Manifolds with Correlated Variability

no code implementations 27 Nov 2022 Albert J. Wakhloo, Tamara J. Sussman, SueYeon Chung

Understanding how the statistical and geometric properties of neural activity relate to performance is a key problem in theoretical neuroscience and deep learning.

Classification, Object

The Implicit Bias of Gradient Descent on Generalized Gated Linear Networks

1 code implementation 5 Feb 2022 Samuel Lippl, L. F. Abbott, SueYeon Chung

Understanding the asymptotic behavior of gradient-descent training of deep neural networks is essential for revealing inductive biases and improving network performance.

Inductive Bias

Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception

1 code implementation NeurIPS 2021 Joel Dapello, Jenelle Feather, Hang Le, Tiago Marques, David D. Cox, Josh H. McDermott, James J. DiCarlo, SueYeon Chung

Adversarial examples are often cited by neuroscientists and machine learning researchers as an example of how computational models diverge from biological sensory systems.

Adversarial Robustness

Divisive Feature Normalization Improves Image Recognition Performance in AlexNet

1 code implementation ICLR 2022 Michelle Miller, SueYeon Chung, Kenneth D. Miller

In conclusion, divisive normalization enhances image recognition performance, most strongly when combined with canonical normalization. In doing so, it reduces manifold capacity and sparsity in early layers while increasing them in final layers, and it increases low- to mid-wavelength power in the first-layer receptive fields.
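The core operation, divisive normalization, divides each unit's response by pooled activity across channels. A minimal sketch follows; the paper's variant (pool sizes, exponents, learned parameters) may differ, and all shapes here are illustrative.

```python
import numpy as np

def divisive_normalize(x, sigma=1.0):
    """Divide each unit's response by activity pooled across channels.

    x: feature maps with shape (channels, height, width); sigma keeps the
    denominator away from zero. A simplified sketch of the operation.
    """
    # Pool squared activity over channels at each spatial location.
    pooled = (x ** 2).mean(axis=0, keepdims=True)
    return x / np.sqrt(sigma ** 2 + pooled)

rng = np.random.default_rng(0)
fmap = rng.standard_normal((16, 8, 8))   # e.g. an early conv layer's output
out = divisive_normalize(fmap)
print(out.shape)
```

Because the denominator is at least `sigma`, the operation only ever shrinks responses, with the strongest suppression where pooled activity is high.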

Understanding the Logit Distributions of Adversarially-Trained Deep Neural Networks

no code implementations 26 Aug 2021 Landan Seguin, Anthony Ndirango, Neeli Mishra, SueYeon Chung, Tyler Lee

Motivated by a recent study on learning robustness without input perturbations by distilling an AT model, we explore what is learned during adversarial training by analyzing the distribution of logits in AT models.

Adversarial Robustness

Credit Assignment Through Broadcasting a Global Error Vector

1 code implementation NeurIPS 2021 David G. Clark, L. F. Abbott, SueYeon Chung

We prove that these weight updates are matched in sign to the gradient, enabling accurate credit assignment.
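The idea can be contrasted with backpropagation in a toy one-hidden-layer network: the same global error vector is broadcast to the hidden layer through a fixed random matrix instead of the transposed forward weights. This sketch is illustrative only; for a generic network the broadcast update need not match the gradient's sign, and the paper's proof applies to the specific vectorized architectures it introduces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: x -> h = relu(W1 @ x) -> y = W2 @ h (shapes are made up).
n_in, n_hid, n_out = 10, 20, 5
W1 = 0.1 * rng.standard_normal((n_hid, n_in))
W2 = 0.1 * rng.standard_normal((n_out, n_hid))
B = 0.1 * rng.standard_normal((n_hid, n_out))   # fixed random feedback matrix

x = rng.standard_normal(n_in)
target = rng.standard_normal(n_out)

h = np.maximum(0.0, W1 @ x)
y = W2 @ h
e = y - target                                   # global error vector

# Broadcast the same error vector to the hidden layer through B,
# rather than backpropagating it through W2.T.
dW1_broadcast = np.outer((B @ e) * (h > 0), x)

# Exact backpropagation update, for comparison.
dW1_backprop = np.outer((W2.T @ e) * (h > 0), x)
```

Broadcasting removes the need to transport the forward weights backward, which is the biologically motivated constraint the paper addresses.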

Statistical Mechanics of Neural Processing of Object Manifolds

no code implementations 1 Jun 2021 SueYeon Chung

In this thesis, we generalize Gardner's analysis and establish a theory of linear classification of manifolds synthesizing statistical and geometric properties of high dimensional signals.

Object, Object Recognition

On the geometry of generalization and memorization in deep neural networks

no code implementations ICLR 2021 Cory Stephenson, Suchismita Padhy, Abhinav Ganesh, Yue Hui, Hanlin Tang, SueYeon Chung

Understanding how large neural networks avoid memorizing training data is key to explaining their high generalization performance.

Memorization

Syntactic Perturbations Reveal Representational Correlates of Hierarchical Phrase Structure in Pretrained Language Models

no code implementations ACL (RepL4NLP) 2021 Matteo Alleman, Jonathan Mamou, Miguel A Del Rio, Hanlin Tang, Yoon Kim, SueYeon Chung

While vector-based language representations from pretrained language models have set a new standard for many NLP tasks, there is not yet a complete accounting of their inner workings.

Sentence

Neural population geometry: An approach for understanding biological and artificial neural networks

no code implementations 14 Apr 2021 SueYeon Chung, L. F. Abbott

One approach to addressing this challenge is to utilize mathematical and computational tools to analyze the geometry of these high-dimensional representations, i.e., neural population geometry.

BIG-bench Machine Learning, Disentanglement
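One common geometric summary in this line of work is the effective dimensionality of population activity, e.g. the participation ratio of the covariance eigenvalues. Below is a minimal sketch on synthetic data (all shapes and the generative model are assumptions, not taken from the paper).

```python
import numpy as np

def participation_ratio(responses):
    """Effective dimensionality of population activity.

    responses: (n_samples, n_neurons). Returns
    PR = (sum_i lambda_i)^2 / sum_i lambda_i^2
    over the eigenvalues of the response covariance.
    """
    centered = responses - responses.mean(axis=0)
    cov = centered.T @ centered / (len(responses) - 1)
    eig = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    return eig.sum() ** 2 / (eig ** 2).sum()

rng = np.random.default_rng(0)

# Synthetic "population": 100 neurons whose activity lives in a
# 3-dimensional latent subspace, so PR should come out near 3.
latent = rng.standard_normal((500, 3))
mixing = rng.standard_normal((3, 100))
pr = participation_ratio(latent @ mixing)
print(f"participation ratio: {pr:.2f}")
```

The participation ratio is bounded above by the rank of the covariance, so recovering a value close to 3 here confirms the activity's low-dimensional structure despite the 100-neuron ambient space.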

Representational correlates of hierarchical phrase structure in deep language models

no code implementations 1 Jan 2021 Matteo Alleman, Jonathan Mamou, Miguel A Del Rio, Hanlin Tang, Yoon Kim, SueYeon Chung

Importing from computational and cognitive neuroscience the notion of representational invariance, we perform a series of probes designed to test the sensitivity of Transformer representations to several kinds of structure in sentences.

Sentence

On 1/n neural representation and robustness

1 code implementation NeurIPS 2020 Josue Nassar, Piotr Aleksander Sokol, SueYeon Chung, Kenneth D. Harris, Il Memming Park

In this work, we investigate the latter by juxtaposing experimental results regarding the covariance spectrum of neural representations in the mouse V1 (Stringer et al.) with artificial neural networks.

Adversarial Robustness
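The covariance spectrum in question is the eigenvalue decay law lambda_n ~ n^(-alpha), with alpha near 1 reported for mouse V1. A sketch of estimating alpha from data follows, using synthetic responses built to have a 1/n spectrum; the data and fitting range are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthesize responses whose covariance eigenvalues decay as 1/n,
# mimicking the power-law spectrum reported for mouse V1.
n_neurons, n_samples = 100, 5000
eigvals = 1.0 / np.arange(1, n_neurons + 1)
basis, _ = np.linalg.qr(rng.standard_normal((n_neurons, n_neurons)))
responses = rng.standard_normal((n_samples, n_neurons)) * np.sqrt(eigvals) @ basis.T

# Estimate the covariance spectrum and fit the power-law exponent alpha
# with a log-log linear fit over the leading eigenvalues (alpha ≈ 1 here).
cov = np.cov(responses, rowvar=False)
spectrum = np.sort(np.linalg.eigvalsh(cov))[::-1]
n = np.arange(1, n_neurons + 1)
alpha = -np.polyfit(np.log(n[:50]), np.log(spectrum[:50]), 1)[0]
print(f"estimated alpha: {alpha:.2f}")
```

The same estimator can be applied to hidden-layer activations of an artificial network to compare its spectral decay against the biological value.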

Emergence of Separable Manifolds in Deep Language Representations

1 code implementation ICML 2020 Jonathan Mamou, Hang Le, Miguel Del Rio, Cory Stephenson, Hanlin Tang, Yoon Kim, SueYeon Chung

In addition, we find that the emergence of linear separability in these manifolds is driven by a combined reduction of manifolds' radius, dimensionality and inter-manifold correlations.
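Two of the quantities named above, manifold radius and inter-manifold correlation, can be illustrated with simple point-cloud proxies: spread around the manifold center relative to the center's norm, and cosine similarity between centers. These are rough stand-ins, not the replica-theory quantities the paper computes, and the data below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical "word manifolds": point clouds of contextual embeddings
# with orthogonal centers (all shapes and scales are illustrative).
center_a = np.zeros(20); center_a[0] = 3.0
center_b = np.zeros(20); center_b[1] = 3.0
cloud_a = center_a + 0.3 * rng.standard_normal((40, 20))
cloud_b = center_b + 0.3 * rng.standard_normal((40, 20))

def radius(cloud):
    # Mean spread around the manifold center, relative to the center's norm.
    c = cloud.mean(axis=0)
    return np.linalg.norm(cloud - c, axis=1).mean() / np.linalg.norm(c)

def center_correlation(a, b):
    # Cosine similarity of manifold centers: a proxy for inter-manifold correlation.
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    return ca @ cb / (np.linalg.norm(ca) * np.linalg.norm(cb))

r_a = radius(cloud_a)
corr_ab = center_correlation(cloud_a, cloud_b)
print(f"radius: {r_a:.2f}, center correlation: {corr_ab:.2f}")
```

In the paper's framework, smaller radii and weaker inter-manifold correlations both make the manifolds easier to separate linearly.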

Probing emergent geometry in speech models via replica theory

no code implementations 28 May 2019 Suchismita Padhy, Jenelle Feather, Cory Stephenson, Oguz Elibol, Hanlin Tang, Josh McDermott, SueYeon Chung

The success of deep neural networks in visual tasks has motivated recent theoretical and empirical work to understand how these networks operate.

Speech Recognition

Classification and Geometry of General Perceptual Manifolds

no code implementations 17 Oct 2017 SueYeon Chung, Daniel D. Lee, Haim Sompolinsky

The effects of label sparsity on the classification capacity of manifolds are elucidated, revealing a scaling relation between label sparsity and manifold radius.

Classification, General Classification +2

Learning Data Manifolds with a Cutting Plane Method

no code implementations 28 May 2017 SueYeon Chung, Uri Cohen, Haim Sompolinsky, Daniel D. Lee

We consider the problem of classifying data manifolds where each manifold represents invariances that are parameterized by continuous degrees of freedom.

Data Augmentation

Linear Readout of Object Manifolds

no code implementations 6 Dec 2015 SueYeon Chung, Daniel D. Lee, Haim Sompolinsky

Objects are represented in sensory systems by continuous manifolds due to the sensitivity of neuronal responses to changes in physical features such as location, orientation, and intensity.

Object
