1 code implementation • 25 Feb 2025 • Victor Geadah, Amin Nejatbakhsh, David Lipshutz, Jonathan W. Pillow, Alex H. Williams
Neural population activity exhibits complex, nonlinear dynamics, varying in time, over trials, and across experimental conditions.
no code implementations • 19 Dec 2024 • Amin Nejatbakhsh, Victor Geadah, Alex H. Williams, David Lipshutz
Biological and artificial neural systems form high-dimensional neural representations that underpin their computational capabilities.
no code implementations • 12 Nov 2024 • Sarah E. Harvey, David Lipshutz, Alex H. Williams
Neural responses encode information that is useful for a variety of downstream tasks.
no code implementations • 20 Oct 2024 • Jenelle Feather, David Lipshutz, Sarah E. Harvey, Alex H. Williams, Eero P. Simoncelli
This metric can then be used to optimally differentiate a set of models by finding a pair of "principal distortions" that maximize the variance of the models under the metric.
no code implementations • 11 Aug 2024 • Brett W. Larsen, Tamara G. Kolda, Anru R. Zhang, Alex H. Williams
We refer to tensors with some infinite-dimensional modes as quasitensors, and to the decomposition of a tensor with some continuous reproducing kernel Hilbert space (RKHS) modes as CP-HiFi (hybrid infinite and finite dimensional) tensor decomposition.
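The fully finite-dimensional special case of this model is an ordinary CP (CANDECOMP/PARAFAC) decomposition, in which a tensor is written as a sum of R outer products of factor-matrix columns; CP-HiFi replaces some factor matrices with functions in an RKHS. A minimal NumPy sketch of the finite case only (`cp_reconstruct` is a hypothetical helper name, not code from the paper):

```python
import numpy as np

def cp_reconstruct(factors):
    """Reconstruct a tensor from CP factor matrices.

    factors: list of arrays, each of shape (dim_k, R). The tensor is the
    sum over r = 1..R of the outer product of the r-th columns.
    """
    R = factors[0].shape[1]
    shape = tuple(f.shape[0] for f in factors)
    X = np.zeros(shape)
    for r in range(R):
        outer = factors[0][:, r]
        for f in factors[1:]:
            outer = np.multiply.outer(outer, f[:, r])
        X += outer
    return X

# Build a rank-3 tensor of shape (4, 5, 6) from random factors.
rng = np.random.default_rng(0)
factors = [rng.standard_normal((d, 3)) for d in (4, 5, 6)]
X = cp_reconstruct(factors)
```

In the CP-HiFi setting, a continuous mode's factor "matrix" would instead be a set of R functions evaluated wherever the data are observed.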
no code implementations • 19 Nov 2023 • Sarah E. Harvey, Brett W. Larsen, Alex H. Williams
A multitude of (dis)similarity measures between neural network representations have been proposed, resulting in a fragmented research landscape.
no code implementations • 16 Nov 2023 • Meenakshi Khosla, Alex H. Williams
Common measures of neural representational (dis)similarity are designed to be insensitive to rotations and reflections of the neural activation space.
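Linear centered kernel alignment (CKA) is one widely used example of a measure that is insensitive to rotations and reflections of the activation space. A minimal NumPy sketch (a standard formulation, not code from this paper):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between activation matrices X (n x p) and Y (n x q);
    rows are stimuli/samples, columns are units. Invariant to orthogonal
    transformations (rotations/reflections) of either matrix's columns."""
    X = X - X.mean(axis=0)  # center each unit's activations
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, 'fro') ** 2
    den = np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro')
    return num / den
```

Rotating one representation (right-multiplying by an orthogonal matrix) leaves the score unchanged, which is exactly the insensitivity the paper examines.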
1 code implementation • 9 Oct 2023 • Dean A. Pospisil, Brett W. Larsen, Sarah E. Harvey, Alex H. Williams
Measuring geometric similarity between high-dimensional network representations is a topic of longstanding interest to neuroscience and deep learning.
1 code implementation • 21 Nov 2022 • Lyndon R. Duong, Jingyang Zhou, Josue Nassar, Jules Berman, Jeroen Olieslagers, Alex H. Williams
Quantifying similarity between neural representations (e.g., hidden-layer activation vectors) is a perennial problem in deep learning and neuroscience research.
1 code implementation • 13 Jan 2022 • Yixin Wang, Anthony Degleris, Alex H. Williams, Scott W. Linderman
This construction resembles Bayesian nonparametric mixture models such as the Dirichlet process mixture model (DPMM), in that the number of latent events (i.e., clusters) is a random variable; however, the point-process formulation makes the Neyman-Scott process (NSP) especially well suited to modeling spatiotemporal data.
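The generative idea behind a Neyman-Scott process can be illustrated in one dimension: latent events arrive as a homogeneous Poisson process, and each spawns a Poisson number of observed points jittered around it. A hypothetical sketch (not the paper's implementation, which handles inference as well as sampling):

```python
import numpy as np

def sample_neyman_scott(rate, mean_offspring, sigma, T, rng):
    """Sample a 1-D Neyman-Scott process on [0, T].

    Latent events arrive as a homogeneous Poisson process with the given
    rate; each event spawns a Poisson(mean_offspring) number of observed
    points, placed with Gaussian jitter (std sigma) around the event time.
    """
    n_events = rng.poisson(rate * T)          # random number of latent events
    parents = rng.uniform(0, T, size=n_events)
    points = []
    for t in parents:
        n_off = rng.poisson(mean_offspring)
        points.extend(t + sigma * rng.standard_normal(n_off))
    return np.sort(np.asarray(points)), parents
```

Note that the number of latent events (clusters) is itself random, which is the property the sentence above compares to the DPMM.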
2 code implementations • NeurIPS 2021 • Alex H. Williams, Erin Kunz, Simon Kornblith, Scott W. Linderman
In doing so, we identify relationships between neural representations that are interpretable in terms of anatomical features and model performance.
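A simple instance of the rotation-invariant shape metrics this paper generalizes is the orthogonal Procrustes distance: the smallest Frobenius distance between two activation matrices after optimally rotating one onto the other. A NumPy sketch under the assumption of pre-centered inputs (a textbook construction, not the paper's code):

```python
import numpy as np

def procrustes_distance(X, Y):
    """min over orthogonal Q of ||X - Y Q||_F for n x p matrices X, Y,
    solved in closed form via the SVD of Y^T X (orthogonal Procrustes)."""
    _, s, _ = np.linalg.svd(Y.T @ X)
    cross = s.sum()  # = max over orthogonal Q of trace(Q^T Y^T X)
    d2 = (X ** 2).sum() + (Y ** 2).sum() - 2.0 * cross
    return np.sqrt(max(d2, 0.0))  # clip tiny negative rounding error
```

Because the optimization is over orthogonal matrices, two representations that differ only by a rotation or reflection are at distance zero, and the resulting quantity satisfies the axioms of a metric.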
no code implementations • 8 Mar 2021 • Alex H. Williams, Scott W. Linderman
Individual neurons often produce highly variable responses over nominally identical trials, reflecting a mixture of intrinsic "noise" and systematic changes in the animal's cognitive and behavioral state.
1 code implementation • NeurIPS 2020 • Alex H. Williams, Anthony Degleris, Yixin Wang, Scott W. Linderman
Sparse sequences of neural spikes are posited to underlie aspects of working memory, motor production, and learning.
no code implementations • NeurIPS 2019 • Niru Maheswaranathan, Alex H. Williams, Matthew D. Golub, Surya Ganguli, David Sussillo
To address these foundational questions, we study populations of thousands of networks, with commonly used RNN architectures, trained to solve neuroscientifically motivated tasks and characterize their nonlinear dynamics.
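A standard tool for characterizing RNN nonlinear dynamics in this line of work is numerical fixed-point finding (Sussillo & Barak, 2013): minimize the "speed" q(h) = ½‖F(h) − h‖² of the state-update map by gradient descent. A minimal sketch for a vanilla tanh RNN with no inputs (an illustration of the technique, not this paper's code):

```python
import numpy as np

def find_fixed_point(W, h0, lr=0.05, n_steps=5000):
    """Approximate a fixed point of h -> tanh(W h) by gradient descent
    on q(h) = 0.5 * ||tanh(W h) - h||^2."""
    h = h0.copy()
    for _ in range(n_steps):
        a = np.tanh(W @ h)
        F = a - h                   # residual; zero exactly at a fixed point
        D = 1.0 - a ** 2            # diagonal of tanh' evaluated at W h
        grad = W.T @ (D * F) - F    # gradient of q with respect to h
        h -= lr * grad
    return h
```

Linearizing the dynamics around the fixed points found this way (via the Jacobian of the update map) is what lets one compare the dynamical mechanisms of many trained networks at scale.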
no code implementations • 29 Jun 2019 • Anthony Degleris, Ben Antin, Surya Ganguli, Alex H. Williams
Identifying recurring patterns in high-dimensional time series data is an important problem in many scientific domains.
no code implementations • ICML 2019 Workshop on Identifying and Understanding Deep Learning Phenomena • Niru Maheswaranathan, Alex H. Williams, Matthew D. Golub, Surya Ganguli, David Sussillo
Recurrent neural networks (RNNs) are a powerful tool for modeling sequential data.