Search Results for author: Alex H. Williams

Found 11 papers, 5 papers with code

Duality of Bures and Shape Distances with Implications for Comparing Neural Representations

no code implementations · 19 Nov 2023 · Sarah E. Harvey, Brett W. Larsen, Alex H. Williams

A multitude of (dis)similarity measures between neural network representations have been proposed, resulting in a fragmented research landscape.

Soft Matching Distance: A metric on neural representations that captures single-neuron tuning

no code implementations · 16 Nov 2023 · Meenakshi Khosla, Alex H. Williams

Common measures of neural representational (dis)similarity are designed to be insensitive to rotations and reflections of the neural activation space.
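One standard way to build a rotation- and reflection-insensitive measure (an illustrative sketch, not necessarily the method proposed in this paper) is the orthogonal Procrustes distance: align one activation matrix to the other with the best orthogonal transform, then measure the residual. The function name and setup below are hypothetical.

```python
import numpy as np

def procrustes_distance(X, Y):
    """Rotation/reflection-invariant distance between activation
    matrices X, Y of shape (n_samples, n_neurons): find the orthogonal
    Q minimizing ||X - Y Q||_F and return the residual norm."""
    # Optimal Q comes from the SVD of the cross-covariance Y^T X.
    U, _, Vt = np.linalg.svd(Y.T @ X)
    Q = U @ Vt  # best orthogonal (rotation or reflection) alignment
    return np.linalg.norm(X - Y @ Q)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
R, _ = np.linalg.qr(rng.normal(size=(20, 20)))  # a random orthogonal matrix
# A rotated copy of X is at (numerically) zero distance from X.
print(procrustes_distance(X, X @ R))
```

Note that this metric treats all orthogonal transformations of the activation space as equivalent, which is exactly the insensitivity to single-neuron tuning that the paper's soft matching distance is designed to avoid.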

Estimating Shape Distances on Neural Representations with Limited Samples

1 code implementation · 9 Oct 2023 · Dean A. Pospisil, Brett W. Larsen, Sarah E. Harvey, Alex H. Williams

Measuring geometric similarity between high-dimensional network representations is a topic of longstanding interest to neuroscience and deep learning.

Representational dissimilarity metric spaces for stochastic neural networks

1 code implementation · 21 Nov 2022 · Lyndon R. Duong, Jingyang Zhou, Josue Nassar, Jules Berman, Jeroen Olieslagers, Alex H. Williams

Quantifying similarity between neural representations -- e.g., hidden layer activation vectors -- is a perennial problem in deep learning and neuroscience research.

Spatiotemporal Clustering with Neyman-Scott Processes via Connections to Bayesian Nonparametric Mixture Models

1 code implementation · 13 Jan 2022 · Yixin Wang, Anthony Degleris, Alex H. Williams, Scott W. Linderman

This construction is similar to Bayesian nonparametric mixture models like the Dirichlet process mixture model (DPMM) in that the number of latent events (i.e., clusters) is a random variable, but the point process formulation makes the NSP especially well suited to modeling spatiotemporal data.
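The generative structure described above can be sketched in one dimension: latent events arrive as a homogeneous Poisson process, so their number is a random variable, and each event spawns a random cluster of observed points around it. This is a minimal illustrative sampler, not the paper's model or code; all names and parameters are hypothetical.

```python
import numpy as np

def sample_neyman_scott(rate, mean_offspring, width, T, rng):
    """Sample a 1-D Neyman-Scott process on [0, T].

    Latent events follow a homogeneous Poisson process with the given
    rate; each event emits a Poisson(mean_offspring) number of observed
    points, Gaussian-jittered around the event time with std `width`.
    Returns (latent event times, sorted observed points)."""
    n_events = rng.poisson(rate * T)            # random number of clusters
    event_times = rng.uniform(0, T, n_events)   # latent event locations
    points = []
    for t in event_times:
        n_off = rng.poisson(mean_offspring)     # offspring count per event
        points.extend(t + width * rng.normal(size=n_off))
    return event_times, np.array(sorted(points))

rng = np.random.default_rng(1)
events, pts = sample_neyman_scott(rate=0.5, mean_offspring=10,
                                  width=0.1, T=20.0, rng=rng)
print(f"{len(events)} latent events produced {len(pts)} observed points")
```

Because the latent event count is itself random, inference over this model must handle an unknown number of clusters, which is the connection to Dirichlet process mixtures that the paper exploits.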

Tasks: Bayesian Inference, Clustering, +1

Generalized Shape Metrics on Neural Representations

2 code implementations · NeurIPS 2021 · Alex H. Williams, Erin Kunz, Simon Kornblith, Scott W. Linderman

In doing so, we identify relationships between neural representations that are interpretable in terms of anatomical features and model performance.

Statistical Neuroscience in the Single Trial Limit

no code implementations · 8 Mar 2021 · Alex H. Williams, Scott W. Linderman

Individual neurons often produce highly variable responses over nominally identical trials, reflecting a mixture of intrinsic "noise" and systematic changes in the animal's cognitive and behavioral state.

Universality and individuality in neural dynamics across large populations of recurrent networks

no code implementations · NeurIPS 2019 · Niru Maheswaranathan, Alex H. Williams, Matthew D. Golub, Surya Ganguli, David Sussillo

To address these foundational questions, we study populations of thousands of networks with commonly used RNN architectures, trained to solve neuroscientifically motivated tasks, and characterize their nonlinear dynamics.
