no code implementations • 27 Feb 2025 • Hyunmo Kang, Abdulkadir Canatar, SueYeon Chung
Measuring representational similarity between neural recordings and computational models is challenging due to constraints on the number of neurons that can be recorded simultaneously.
no code implementations • 20 Feb 2025 • Chanwoo Chun, Abdulkadir Canatar, SueYeon Chung, Daniel D. Lee
In both artificial and biological systems, the centered kernel alignment (CKA) has become a widely used tool for quantifying neural representation similarity.
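As a point of reference, linear CKA has a simple closed form in terms of the centered response matrices. The sketch below is a minimal NumPy illustration of that formula; the toy matrices and variable names are placeholders, not data or code from the paper.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two response matrices.

    X: (n_stimuli, n_units_1), Y: (n_stimuli, n_units_2).
    Returns a scalar in [0, 1]; 1 means identical representations up to
    an orthogonal transform and isotropic scaling.
    """
    # Center each unit's responses across stimuli.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)

    # Hilbert-Schmidt inner products of the (linear) Gram matrices.
    cross = np.linalg.norm(Y.T @ X, 'fro') ** 2      # <K_X, K_Y>
    norm_x = np.linalg.norm(X.T @ X, 'fro')          # ||K_X||
    norm_y = np.linalg.norm(Y.T @ Y, 'fro')          # ||K_Y||
    return cross / (norm_x * norm_y)

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))                   # e.g. model features
B = A @ rng.standard_normal((50, 30)) + 0.1 * rng.standard_normal((200, 30))
print(linear_cka(A, B))                              # high similarity
print(linear_cka(A, rng.standard_normal((200, 30)))) # near chance
```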
1 code implementation • 6 Dec 2024 • Abdulkadir Canatar, SueYeon Chung
A key problem in deep learning and computational neuroscience is relating the geometrical properties of neural representations to task performance.
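One commonly used geometric descriptor of a representation is its effective dimensionality (participation ratio). The sketch below computes it for synthetic responses; it is only an illustrative example of a geometric property, not the specific quantity analyzed in the paper.

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality of responses X (n_samples, n_units).

    PR = (sum_i lambda_i)^2 / sum_i lambda_i^2, where lambda_i are the
    eigenvalues of the response covariance; ranges from 1 (one dominant
    direction) to n_units (isotropic responses).
    """
    X = X - X.mean(axis=0, keepdims=True)
    cov = X.T @ X / (X.shape[0] - 1)
    eig = np.linalg.eigvalsh(cov)
    return eig.sum() ** 2 / (eig ** 2).sum()

rng = np.random.default_rng(0)
low_dim = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 100))
print(participation_ratio(low_dim))                          # close to 3
print(participation_ratio(rng.standard_normal((500, 100))))  # close to 100
```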
1 code implementation • NeurIPS 2023 • Abdulkadir Canatar, Jenelle Feather, Albert Wakhloo, SueYeon Chung
Neural network representations are often compared to biological ones by performing regression between the network responses and responses measured from the biological system.
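A minimal sketch of this regression-based comparison, assuming cross-validated ridge regression from model features to recorded responses and a held-out R² score (the data here are synthetic stand-ins, not the paper's benchmark):

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical stand-ins: model_features would be network activations to a
# stimulus set, neural_responses the recorded responses to the same stimuli.
model_features = rng.standard_normal((400, 256))
neural_responses = (model_features @ rng.standard_normal((256, 40)) * 0.1
                    + rng.standard_normal((400, 40)))

X_tr, X_te, Y_tr, Y_te = train_test_split(
    model_features, neural_responses, test_size=0.25, random_state=0)

# Cross-validated ridge from model features to each recorded unit.
reg = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X_tr, Y_tr)

# "Neural predictivity": held-out R^2 per unit, often averaged or
# noise-corrected before comparing models.
per_unit_r2 = r2_score(Y_te, reg.predict(X_te), multioutput='raw_values')
print(per_unit_r2.mean())
```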
no code implementations • 14 Jun 2022 • Abdulkadir Canatar, Evan Peters, Cengiz Pehlevan, Stefan M. Wild, Ruslan Shaydulin
Quantum computers are known to provide speedups over classical state-of-the-art machine learning methods in some specialized settings.
1 code implementation • NeurIPS 2021 • Abdulkadir Canatar, Blake Bordelon, Cengiz Pehlevan
Here, we use methods from statistical physics to study generalization in kernel regression when the training and test distributions differ.
Tasks: BIG-bench Machine Learning, Out-of-Distribution Generalization (+1)
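The setting can be illustrated empirically: a minimal sketch, assuming kernel ridge regression with an RBF kernel and a Gaussian mean shift between the training and test inputs. It reproduces the phenomenon, not the paper's analytic theory.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
target = lambda x: np.sin(3 * x).ravel()

# Train on x ~ N(0, 1); test both in-distribution and under a mean shift.
x_train = rng.normal(0.0, 1.0, size=(200, 1))
y_train = target(x_train) + 0.1 * rng.standard_normal(200)
x_test_in = rng.normal(0.0, 1.0, size=(1000, 1))
x_test_out = rng.normal(2.0, 1.0, size=(1000, 1))  # shifted test distribution

model = KernelRidge(kernel='rbf', gamma=1.0, alpha=1e-3).fit(x_train, y_train)

mse = lambda x: np.mean((model.predict(x) - target(x)) ** 2)
print('in-distribution test MSE:', mse(x_test_in))
print('shifted test MSE        :', mse(x_test_out))  # typically much larger
```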
1 code implementation • NeurIPS 2021 • Jacob A. Zavatone-Veth, Abdulkadir Canatar, Benjamin S. Ruben, Cengiz Pehlevan
However, our theoretical understanding of how the learned hidden layer representations of finite networks differ from the fixed representations of infinite networks remains incomplete.
1 code implementation • 23 Jun 2020 • Abdulkadir Canatar, Blake Bordelon, Cengiz Pehlevan
We present applications of our theory to real and synthetic datasets and to many kernels, including those that arise from training deep neural networks in the infinite-width limit.
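For a one-hidden-layer ReLU network, the infinite-width (NNGP) kernel has a closed form, the degree-1 arc-cosine kernel. The sketch below computes it and plugs it into kernel ridge regression; the normalization convention, toy data, and ridge parameter are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def relu_nngp_kernel(X1, X2):
    """Degree-1 arc-cosine kernel: the infinite-width NNGP kernel of a
    one-hidden-layer ReLU network with i.i.d. Gaussian weights
    (up to an overall normalization convention).
    """
    n1 = np.linalg.norm(X1, axis=1)
    n2 = np.linalg.norm(X2, axis=1)
    cos = np.clip((X1 @ X2.T) / np.outer(n1, n2), -1.0, 1.0)
    theta = np.arccos(cos)
    return np.outer(n1, n2) / (2 * np.pi) * (np.sin(theta) + (np.pi - theta) * cos)

# Kernel ridge regression with the infinite-width kernel.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = np.sin(X[:, 0])
K = relu_nngp_kernel(X, X)
coef = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)  # ridge coefficients

X_new = rng.standard_normal((5, 5))
y_pred = relu_nngp_kernel(X_new, X) @ coef
print(y_pred)
```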
1 code implementation • ICML 2020 • Blake Bordelon, Abdulkadir Canatar, Cengiz Pehlevan
Using theoretical methods from Gaussian processes and statistical physics, we derive analytical expressions for the generalization performance of kernel regression as a function of the number of training samples.
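A minimal empirical counterpart to such expressions is the measured learning curve: test error averaged over dataset draws as a function of training-set size. The sketch below does this for an RBF kernel on a synthetic target; the kernel, target, and ridge parameter are illustrative choices, not those of the paper.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=10.0):
    """Gaussian (RBF) kernel matrix between two sets of inputs."""
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
target = lambda x: np.sin(3 * x).ravel()
x_test = rng.uniform(-1, 1, size=(2000, 1))

# Empirical learning curve: average test MSE over dataset draws for each
# training-set size p; analytic theories predict the shape of this curve
# from the kernel's eigenspectrum and its alignment with the target.
for p in [10, 20, 40, 80, 160, 320]:
    errors = []
    for _ in range(20):
        x = rng.uniform(-1, 1, size=(p, 1))
        y = target(x) + 0.1 * rng.standard_normal(p)
        coef = np.linalg.solve(rbf_kernel(x, x) + 1e-4 * np.eye(p), y)
        y_hat = rbf_kernel(x_test, x) @ coef
        errors.append(np.mean((y_hat - target(x_test)) ** 2))
    print(f"p = {p:4d}   test MSE = {np.mean(errors):.4f}")
```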