no code implementations • 2 Dec 2024 • David G. Clark, Haim Sompolinsky
Statistical physics provides tools for analyzing high-dimensional problems in machine learning and theoretical neuroscience.
no code implementations • 12 Nov 2024 • Binxu Wang, Jiaqi Shang, Haim Sompolinsky
We evaluated their ability to generate structurally consistent samples and perform panel completion via unconditional and conditional sampling.
no code implementations • 26 Jul 2024 • Zechen Zhang, Haim Sompolinsky
The infinite-width limit of random neural networks is known to give rise to a Neural Network Gaussian Process (NNGP) (Lee et al., 2018), characterized by task-independent kernels.
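As a minimal illustration of the NNGP limit, the sketch below computes the infinite-width kernel of a fully connected ReLU network via the standard arc-cosine layer recursion (Cho & Saul, 2009); the depth and variance parameters are illustrative choices, not taken from the paper.

```python
import numpy as np

def nngp_kernel(X, depth=3, sigma_w=1.0, sigma_b=0.0):
    """NNGP kernel of a fully connected ReLU network (illustrative sketch).

    Each layer updates the kernel with the ReLU arc-cosine formula,
    which is the exact infinite-width Gaussian-process covariance.
    """
    # input-layer kernel
    K = sigma_b**2 + sigma_w**2 * (X @ X.T) / X.shape[1]
    for _ in range(depth):
        d = np.sqrt(np.diag(K))
        corr = np.clip(K / np.outer(d, d), -1.0, 1.0)
        theta = np.arccos(corr)
        # ReLU arc-cosine kernel update (Cho & Saul, 2009)
        K = sigma_b**2 + (sigma_w**2 / (2 * np.pi)) * np.outer(d, d) * (
            np.sin(theta) + (np.pi - theta) * np.cos(theta))
    return K

X = np.random.default_rng(0).standard_normal((5, 10))
K = nngp_kernel(X)
```

Note that the kernel depends only on the architecture and the inputs, not on any training task — this is the task-independence the entry refers to.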
no code implementations • 14 Jul 2024 • Haozhe Shan, Qianyi Li, Haim Sompolinsky
For networks with task-specific readouts, the theory identifies a phase transition where CL performance shifts dramatically as tasks become less similar, as measured by the OPs.
no code implementations • 24 Jun 2024 • Alexander van Meegen, Haim Sompolinsky
Our findings highlight how network properties such as scaling of weights and neuronal nonlinearity can profoundly influence the emergent representations.
1 code implementation • 24 May 2024 • Lorenzo Tiberi, Francesca Mignacco, Kazuki Irie, Haim Sompolinsky
Our theory shows that the predictor statistics are expressed as a sum of independent kernels, each associated with a different 'attention path', defined as an information pathway through different attention heads across layers.
no code implementations • 21 Dec 2023 • Michael Kuoch, Chi-Ning Chou, Nikhil Parthasarathy, Joel Dapello, James J. DiCarlo, Haim Sompolinsky, SueYeon Chung
Recently, growth in our understanding of the computations performed in both biological and artificial neural networks has largely been driven by either low-level mechanistic studies or global normative approaches.
no code implementations • 8 Sep 2023 • Yehonatan Avidan, Qianyi Li, Haim Sompolinsky
This work closes the gap between the NTK and NNGP theories, providing a comprehensive framework for the learning process of deep wide neural networks and for analyzing dynamics in biological circuits.
no code implementations • 31 Oct 2022 • Qianyi Li, Haim Sompolinsky
The rich and diverse behavior of the GGDLNs suggests that they are useful, analytically tractable models of learning single and multiple tasks in finite-width nonlinear deep networks.
no code implementations • 14 Jun 2022 • Weishun Zhong, Ben Sorscher, Daniel D Lee, Haim Sompolinsky
Our theory predicts that the reduction in capacity due to the constrained weight distribution is related to the Wasserstein distance between the imposed distribution and the standard normal distribution.
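The quantity in question can be estimated empirically. The sketch below computes the 1-Wasserstein distance between a sampled weight distribution and a standard normal sample, using the quantile (order-statistics) form of the distance; the binary weight distribution is one illustrative choice of constraint, not necessarily the one studied in the paper.

```python
import numpy as np

def wasserstein_1d(x, y):
    """Empirical 1-Wasserstein distance between two equal-size 1D samples.

    For sorted samples, W1 is the mean absolute difference of order
    statistics (the quantile-function form of the distance).
    """
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

rng = np.random.default_rng(0)
n = 100_000
gauss = rng.standard_normal(n)            # standard normal weights
binary = rng.choice([-1.0, 1.0], size=n)  # sign-constrained weights

d_same = wasserstein_1d(gauss, rng.standard_normal(n))  # near zero
d_binary = wasserstein_1d(gauss, binary)                # clearly positive
```

In the theory's terms, the larger `d_binary` indicates that binary weights sit farther from the standard normal and would therefore incur a larger capacity reduction.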
no code implementations • 28 May 2022 • Ran Rubin, Haim Sompolinsky
However, for dynamical systems with event-based outputs, such as spiking neural networks and other continuous-time threshold-crossing systems, this optimality criterion is inapplicable due to the strong temporal correlations in their input and output.
1 code implementation • 14 Apr 2022 • Naoki Hiratani, Haim Sompolinsky
In these processes, two different modalities, such as location and objects, events and their contextual cues, and words and their roles, need to be bound together, but little is known about the underlying neural mechanisms.
no code implementations • 14 Mar 2022 • Uri Cohen, Haim Sompolinsky
A neural population responding to multiple appearances of a single object defines a manifold in the neural response space.
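A toy version of this construction: generate a population's responses to many appearances of one object under a continuous nuisance parameter, and summarize the resulting manifold by its centroid, radius, and effective dimensionality. The cosine-tuning model and all sizes below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50   # neurons
P = 100  # appearances (e.g. orientations) of a single object

# toy tuning: each neuron is cosine-tuned to a nuisance angle
theta = np.linspace(0, 2 * np.pi, P, endpoint=False)
pref = rng.uniform(0, 2 * np.pi, N)   # preferred angles
gain = rng.uniform(0.5, 1.5, N)
responses = gain * np.cos(theta[:, None] - pref[None, :])  # P x N manifold

centroid = responses.mean(axis=0)  # manifold center
radius = np.linalg.norm(responses - centroid, axis=1).mean()

# effective dimensionality via the participation ratio of the covariance
cov = np.cov((responses - centroid).T)
eig = np.linalg.eigvalsh(cov)
participation_ratio = eig.sum() ** 2 / (eig ** 2).sum()
```

For pure cosine tuning, the responses lie exactly in a 2D subspace (spanned by the population cosine and sine patterns), so the participation ratio comes out close to 2 despite the 50-dimensional response space.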
no code implementations • 11 Mar 2021 • Itamar Daniel Landau, Haim Sompolinsky
Finally, we define the alignment matrix as the overlap between left and right-singular vectors of the structured connectivity, and show that the singular values of the alignment matrix determine the amplitude of macroscopic variability, while its singular vectors determine the structure.
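A numerical sketch of this construction: take a low-rank structured connectivity, compute its SVD, and form the overlap of left and right singular vectors, here taken as A = Uᵀ V (the paper's precise definition may differ; the rank and sizes below are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
N, R = 200, 3
# low-rank structured connectivity M as a sum of R outer products
Lv = rng.standard_normal((N, R))
Rv = rng.standard_normal((N, R))
M = Lv @ Rv.T / N

U, S, Vt = np.linalg.svd(M, full_matrices=False)
U, S, Vt = U[:, :R], S[:R], Vt[:R]  # keep the structured rank

# alignment matrix: overlap between left and right singular vectors
A = U.T @ Vt.T  # R x R
align_vals = np.linalg.svd(A, compute_uv=False)
```

Because the columns of U and the rows of Vt are orthonormal, the singular values of A are bounded by 1; in the paper's analysis they set the amplitude of the macroscopic variability.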
no code implementations • 7 Dec 2020 • Qianyi Li, Haim Sompolinsky
This procedure allows us to evaluate important network properties, such as its generalization error, the role of network width and depth, the impact of the size of the training set, and the effects of weight regularization and learning stochasticity.
no code implementations • 28 Sep 2020 • Gadi Naveh, Oded Ben-David, Haim Sompolinsky, Zohar Ringel
A recent line of work studied wide deep neural networks (DNNs) by approximating them as Gaussian Processes (GPs).
no code implementations • 19 Aug 2020 • Julia Steinberg, Madhu Advani, Haim Sompolinsky
We find that a sparse expansion of the input to a student perceptron network both increases its capacity and improves its generalization performance when learning a noisy rule from a teacher perceptron, provided the expansions are pruned after learning.
no code implementations • 17 Oct 2017 • SueYeon Chung, Daniel D. Lee, Haim Sompolinsky
The effects of label sparsity on the classification capacity of manifolds are elucidated, revealing a scaling relation between label sparsity and manifold radius.
no code implementations • 28 May 2017 • SueYeon Chung, Uri Cohen, Haim Sompolinsky, Daniel D. Lee
We consider the problem of classifying data manifolds where each manifold represents invariances that are parameterized by continuous degrees of freedom.
no code implementations • 3 May 2017 • Ran Rubin, L. F. Abbott, Haim Sompolinsky
To evaluate the impact of both input and output noise, we determine the robustness of single-neuron stimulus selective responses, as well as the robustness of attractor states of networks of neurons performing memory tasks.
no code implementations • NeurIPS 2016 • Jonathan Kadmon, Haim Sompolinsky
Deep neural networks have received considerable attention due to the success of their training for real-world machine learning applications.
no code implementations • 6 Dec 2015 • SueYeon Chung, Daniel D. Lee, Haim Sompolinsky
Objects are represented in sensory systems by continuous manifolds due to the sensitivity of neuronal responses to changes in physical features such as location, orientation, and intensity.
no code implementations • NeurIPS 2010 • Surya Ganguli, Haim Sompolinsky
Prior work, in the case of Gaussian input sequences and linear neuronal networks, shows that the duration of memory traces in a network cannot exceed the number of neurons (in units of the neuronal time constant), and that no network can outperform an equivalent feedforward network.
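The feedforward case that saturates this bound can be checked in simulation. The sketch below drives a delay-line network with white-noise input and estimates how well each past input can be linearly decoded from the current state: memory is essentially perfect up to the number of neurons and vanishes beyond it. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 20, 5000
W = np.eye(N, k=-1)           # delay-line connectivity: neuron i+1 copies neuron i
v = np.zeros(N); v[0] = 1.0   # scalar input feeds the first neuron

s = rng.standard_normal(T)    # white-noise input sequence
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = W @ x + v * s[t]      # after t steps, x holds s[t], s[t-1], ..., s[t-N+1]
    states[t] = x

def memory(k):
    """R^2 of linearly decoding the input from k steps ago out of the state."""
    X, sk = states[k:], s[:T - k]
    beta = np.linalg.lstsq(X, sk, rcond=None)[0]
    resid = sk - X @ beta
    return 1 - resid.var() / sk.var()
```

Here `memory(k)` is near 1 for k < N and near 0 for k >= N, consistent with the statement that memory duration cannot exceed the number of neurons.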
no code implementations • NeurIPS 2010 • Kanaka Rajan, L. F. Abbott, Haim Sompolinsky
How are the spatial patterns of spontaneous and evoked population responses related?