Search Results for author: Abdulkadir Canatar

Found 9 papers, 6 papers with code

Spectral Analysis of Representational Similarity with Limited Neurons

no code implementations • 27 Feb 2025 • Hyunmo Kang, Abdulkadir Canatar, SueYeon Chung

Measuring representational similarity between neural recordings and computational models is challenging due to constraints on the number of neurons that can be recorded simultaneously.

Task: Denoising

Estimating Neural Representation Alignment from Sparsely Sampled Inputs and Features

no code implementations • 20 Feb 2025 • Chanwoo Chun, Abdulkadir Canatar, SueYeon Chung, Daniel D. Lee

In both artificial and biological systems, the centered kernel alignment (CKA) has become a widely used tool for quantifying neural representation similarity.
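As a rough illustration of the metric this paper studies (a minimal sketch on synthetic data, not the paper's sampling-corrected estimator), linear CKA between two response matrices can be computed directly:

```python
import numpy as np

def linear_cka(X, Y):
    # X, Y: (samples x features) response matrices; columns are centered
    # before comparison, as in the standard linear CKA formulation.
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    num = np.linalg.norm(Xc.T @ Yc, "fro") ** 2
    den = np.linalg.norm(Xc.T @ Xc, "fro") * np.linalg.norm(Yc.T @ Yc, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))  # hypothetical system A responses
Y = rng.standard_normal((100, 30))  # hypothetical system B responses
print(linear_cka(X, X))  # identical representations give CKA = 1
print(linear_cka(X, Y))  # unrelated representations give a value in [0, 1]
```

The paper's concern is precisely that estimates like this are biased when only a subset of inputs and features is observed.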

Statistical Mechanics of Support Vector Regression

1 code implementation • 6 Dec 2024 • Abdulkadir Canatar, SueYeon Chung

A key problem in deep learning and computational neuroscience is relating the geometrical properties of neural representations to task performance.

Task: Regression
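As a toy illustration of the model class analyzed here (not the paper's statistical-mechanics treatment), an ε-insensitive support vector regression with an RBF kernel can be fit with scikit-learn on a hypothetical smooth 1-D target:

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical toy task: recover a smooth function from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

# epsilon sets the width of the insensitive tube; C the regularization trade-off.
model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(X, y)
print(model.score(X, y))  # coefficient of determination on the noisy samples
```

The paper relates the generalization of exactly this kind of predictor to spectral (geometric) properties of the kernel and task.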

A Spectral Theory of Neural Prediction and Alignment

1 code implementation • NeurIPS 2023 • Abdulkadir Canatar, Jenelle Feather, Albert Wakhloo, SueYeon Chung

The representations of neural networks are often compared to those of biological systems by performing regression between the neural network responses and those measured from biological systems.

Tasks: Prediction, Regression
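The model-to-brain regression pipeline the abstract refers to can be sketched as a ridge fit from model features to recorded responses; everything below is a synthetic stand-in (random features and a noisy linear readout), not data or code from the paper:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# Hypothetical setup: 500 stimuli, 256 model features, 50 recorded "neurons"
# whose responses are a noisy linear readout of the model features.
F = rng.standard_normal((500, 256))               # model responses per stimulus
W = rng.standard_normal((256, 50)) / 16.0         # unknown ground-truth mapping
R = F @ W + 0.1 * rng.standard_normal((500, 50))  # simulated neural recordings

# Fit the linear map on a train split, score predictivity on held-out stimuli.
reg = Ridge(alpha=1.0).fit(F[:400], R[:400])
r2 = reg.score(F[400:], R[400:])
print(r2)
```

The paper's contribution is a spectral theory of when such regression scores are high, which this sketch only sets the stage for.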

Bandwidth Enables Generalization in Quantum Kernel Models

no code implementations • 14 Jun 2022 • Abdulkadir Canatar, Evan Peters, Cengiz Pehlevan, Stefan M. Wild, Ruslan Shaydulin

Quantum computers are known to provide speedups over classical state-of-the-art machine learning methods in some specialized settings.

Task: Inductive Bias

Out-of-Distribution Generalization in Kernel Regression

1 code implementation • NeurIPS 2021 • Abdulkadir Canatar, Blake Bordelon, Cengiz Pehlevan

Here, we study generalization in kernel regression when the training and test distributions are different using methods from statistical physics.

Tasks: BIG-bench Machine Learning, Out-of-Distribution Generalization, +1
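The train/test mismatch studied here can be demonstrated numerically with a minimal kernel ridge regression; the distributions, kernel width, and target below are illustrative choices, not the paper's setup:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared distances between rows of A (n x d) and B (m x d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
# Training inputs drawn from N(0,1); an in-distribution test set and a
# shifted test set drawn from N(2,1).
Xtr = rng.normal(0.0, 1.0, size=(200, 1))
ytr = np.sin(Xtr).ravel()
Xte_in = rng.normal(0.0, 1.0, size=(500, 1))
Xte_out = rng.normal(2.0, 1.0, size=(500, 1))

lam = 1e-3  # ridge regularization
K = rbf_kernel(Xtr, Xtr)
alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)

def predict(X):
    return rbf_kernel(X, Xtr) @ alpha

mse_in = np.mean((predict(Xte_in) - np.sin(Xte_in).ravel()) ** 2)
mse_out = np.mean((predict(Xte_out) - np.sin(Xte_out).ravel()) ** 2)
print(mse_in, mse_out)  # error grows where training coverage thins out
```

The paper derives when and by how much the shifted-distribution error exceeds the in-distribution one, using statistical physics rather than simulation.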

Asymptotics of representation learning in finite Bayesian neural networks

1 code implementation • NeurIPS 2021 • Jacob A. Zavatone-Veth, Abdulkadir Canatar, Benjamin S. Ruben, Cengiz Pehlevan

However, our theoretical understanding of how the learned hidden layer representations of finite networks differ from the fixed representations of infinite networks remains incomplete.

Task: Representation Learning

Spectral Bias and Task-Model Alignment Explain Generalization in Kernel Regression and Infinitely Wide Neural Networks

1 code implementation • 23 Jun 2020 • Abdulkadir Canatar, Blake Bordelon, Cengiz Pehlevan

We present applications of our theory to real and synthetic datasets, and for many kernels including those that arise from training deep neural networks in the infinite-width limit.

Tasks: BIG-bench Machine Learning, Inductive Bias, +1

Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks

1 code implementation • ICML 2020 • Blake Bordelon, Abdulkadir Canatar, Cengiz Pehlevan

We derive analytical expressions for the generalization performance of kernel regression as a function of the number of training samples using theoretical methods from Gaussian processes and statistical physics.

Tasks: Gaussian Processes, Regression
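The learning curves this paper derives analytically can be estimated empirically by averaging kernel ridge regression test error over random training sets of increasing size; the target function, kernel width, and sample sizes below are arbitrary illustrative choices:

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    # 1-D RBF kernel between point sets A (p,) and B (q,).
    return np.exp(-gamma * (A[:, None] - B[None, :]) ** 2)

rng = np.random.default_rng(0)

def gen_error(p, trials=20, lam=1e-6):
    # Average test MSE of kernel ridge regression with p training samples.
    errs = []
    for _ in range(trials):
        x = rng.uniform(-np.pi, np.pi, p)
        y = np.sin(x)
        alpha = np.linalg.solve(rbf(x, x) + lam * np.eye(p), y)
        xt = rng.uniform(-np.pi, np.pi, 1000)
        errs.append(np.mean((rbf(xt, x) @ alpha - np.sin(xt)) ** 2))
    return np.mean(errs)

for p in [5, 20, 80]:
    print(p, gen_error(p))  # error shrinks as the training set grows
```

The paper replaces this Monte Carlo estimate with closed-form expressions in terms of the kernel's spectrum.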
