Search Results for author: Parthe Pandit

Found 18 papers, 5 papers with code

On the Nyström Approximation for Preconditioning in Kernel Machines

no code implementations • 6 Dec 2023 • Amirhesam Abedsoltan, Parthe Pandit, Luis Rademacher, Mikhail Belkin

Scalable algorithms for learning kernel models need to be iterative in nature, but convergence can be slow due to poor conditioning.
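
A minimal sketch of the idea studied here, assuming a Gaussian kernel on synthetic data and SciPy's conjugate gradient solver: a Nyström approximation built from $m$ landmark points is applied, via the Woodbury identity, as a preconditioner when solving the regularized kernel system $(K + \lambda I)\alpha = y$ iteratively. All helper names and hyperparameters below are illustrative, not the authors' implementation.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    def gaussian_kernel(X, Z, bandwidth=1.0):
        # Pairwise Gaussian kernel: k(x, z) = exp(-||x - z||^2 / (2 * bandwidth^2)).
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2))

    rng = np.random.default_rng(0)
    n, d, m, lam = 1000, 10, 100, 1e-3
    X = rng.standard_normal((n, d))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

    K = gaussian_kernel(X, X)                    # full kernel matrix (n x n)
    idx = rng.choice(n, size=m, replace=False)   # m landmark points
    Knm, Kmm = K[:, idx], K[np.ix_(idx, idx)]

    # Woodbury form of (Knm Kmm^{-1} Knm^T + lam I)^{-1}, used as the preconditioner.
    inner = np.linalg.inv(lam * Kmm + Knm.T @ Knm)
    precond = LinearOperator((n, n), dtype=np.float64,
                             matvec=lambda v: (v - Knm @ (inner @ (Knm.T @ v))) / lam)

    A = LinearOperator((n, n), dtype=np.float64, matvec=lambda v: K @ v + lam * v)
    alpha, info = cg(A, y, M=precond, maxiter=200)
    print("CG exit flag:", info)  # 0 indicates convergence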

Mechanism of feature learning in convolutional neural networks

1 code implementation • 1 Sep 2023 • Daniel Beaglehole, Adityanarayanan Radhakrishnan, Parthe Pandit, Mikhail Belkin

We then demonstrate the generality of our result by using the patch-based AGOP to enable deep feature learning in convolutional kernel machines.
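
A minimal sketch, assuming a Gaussian-kernel ridge regressor on synthetic data, of the average gradient outer product (AGOP) underlying the feature-learning mechanism described here; the patch-based convolutional variant used in the paper is not reproduced, and all names and hyperparameters are illustrative.

    import numpy as np

    def gaussian_kernel(X, Z, bandwidth=1.0):
        # Pairwise Gaussian kernel: k(x, z) = exp(-||x - z||^2 / (2 * bandwidth^2)).
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2))

    rng = np.random.default_rng(0)
    n, d, bandwidth, lam = 500, 20, 2.0, 1e-3
    X = rng.standard_normal((n, d))
    y = X[:, 0] * X[:, 1]  # target depends only on the first two coordinates

    # Fit a kernel ridge regressor f(x) = sum_j alpha_j k(x, x_j).
    K = gaussian_kernel(X, X, bandwidth)
    alpha = np.linalg.solve(K + lam * np.eye(n), y)

    def grad_f(x):
        # Analytic gradient: grad_x k(x, x_j) = -(x - x_j) / bandwidth^2 * k(x, x_j).
        k = gaussian_kernel(x[None, :], X, bandwidth)[0]
        return -((x[None, :] - X) / bandwidth ** 2 * (alpha * k)[:, None]).sum(0)

    # AGOP: M = (1/n) sum_i grad f(x_i) grad f(x_i)^T.
    grads = np.stack([grad_f(x) for x in X])
    M = grads.T @ grads / n
    # The first two diagonal entries should dominate, reflecting the relevant features.
    print("AGOP diagonal (first 5 coordinates):", np.round(np.diag(M)[:5], 3))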

Toward Large Kernel Models

1 code implementation • 6 Feb 2023 • Amirhesam Abedsoltan, Mikhail Belkin, Parthe Pandit

Recent studies indicate that kernel machines can often perform similarly or better than deep neural networks (DNNs) on small datasets.

Instability and Local Minima in GAN Training with Kernel Discriminators

no code implementations • 21 Aug 2022 • Evan Becker, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher

Generative Adversarial Networks (GANs) are a widely-used tool for generative modeling of complex data.

Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting

no code implementations • 14 Jul 2022 • Neil Mallinar, James B. Simon, Amirhesam Abedsoltan, Parthe Pandit, Mikhail Belkin, Preetum Nakkiran

In this work we argue that while benign overfitting has been instructive and fruitful to study, many real interpolating methods like neural networks do not fit benignly: modest noise in the training set causes nonzero (but non-infinite) excess risk at test time, implying these models are neither benign nor catastrophic but rather fall in an intermediate regime.

Learning Theory

A note on Linear Bottleneck networks and their Transition to Multilinearity

no code implementations • 30 Jun 2022 • Libin Zhu, Parthe Pandit, Mikhail Belkin

In this work we show that linear networks with a bottleneck layer learn bilinear functions of the weights, in a ball of radius $O(1)$ around initialization.
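
As a simple illustration of this statement (the trivial two-layer case, not the paper's general result), a linear network $f(x; W_1, W_2) = W_2 W_1 x$ with a narrow middle layer is exactly bilinear in its weights: $f(x;\, aW_1 + a'W_1',\, W_2) = a\,f(x; W_1, W_2) + a'\,f(x; W_1', W_2)$, and likewise in $W_2$. The paper's result concerns deeper linear networks with a bottleneck layer, where this behaviour emerges in a ball of radius $O(1)$ around initialization.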

On the Inconsistency of Kernel Ridgeless Regression in Fixed Dimensions

no code implementations • 26 May 2022 • Daniel Beaglehole, Mikhail Belkin, Parthe Pandit

"Benign overfitting", the ability of certain algorithms to interpolate noisy training data and yet perform well out-of-sample, has been a topic of considerable recent interest.

regression, Translation

Kernel Methods and Multi-layer Perceptrons Learn Linear Models in High Dimensions

no code implementations • 20 Jan 2022 • Mojtaba Sahraee-Ardakan, Melikasadat Emami, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher

Empirical observations of high-dimensional phenomena, such as the double descent behaviour, have attracted a lot of interest in understanding classical techniques such as kernel methods and their implications for explaining the generalization properties of neural networks.

Implicit Bias of Linear RNNs

no code implementations • 19 Jan 2021 • Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher

The degree of this bias depends on the variance of the transition kernel matrix at initialization and is related to the classic exploding and vanishing gradients problem.

Matrix Inference and Estimation in Multi-Layer Models

1 code implementation • NeurIPS 2020 • Parthe Pandit, Mojtaba Sahraee Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher

In the two-layer neural-network learning problem, this scaling corresponds to the case where the number of input features, as well as training samples, grow to infinity but the number of hidden nodes stays fixed.
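
Concretely, this proportional scaling can be written (in illustrative notation, not necessarily the paper's) as $N, d \to \infty$ with $N/d \to \beta \in (0, \infty)$ while the hidden width $k$ stays $O(1)$, where $d$ is the number of input features, $N$ the number of training samples, and $k$ the number of hidden nodes.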

Imputation

Low-Rank Nonlinear Decoding of $\mu$-ECoG from the Primary Auditory Cortex

no code implementations • 6 May 2020 • Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Alyson K. Fletcher, Sundeep Rangan, Michael Trumpis, Brinnae Bent, Chia-Han Chiang, Jonathan Viventi

This decoding problem is particularly challenging due to the complexity of neural responses in the auditory cortex and the presence of confounding signals in awake animals.

Dimensionality Reduction

Generalization Error of Generalized Linear Models in High Dimensions

3 code implementations • ICML 2020 • Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher

We provide a general framework to characterize the asymptotic generalization error for single-layer neural networks (i.e., generalized linear models) with arbitrary non-linearities, making it applicable to regression as well as classification problems.
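
For orientation, a single-layer model of this kind can be written generically (notation illustrative, not necessarily the paper's) as $y = \phi(\langle x, w^* \rangle, \xi)$, where $\phi$ is an arbitrary output non-linearity and $\xi$ is noise independent of the input $x$; choosing $\phi(z, \xi) = z + \xi$ recovers linear regression, while $\phi(z, \xi) = \mathrm{sign}(z + \xi)$ gives binary classification.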

BIG-bench Machine Learning, regression +1

Inference in Multi-Layer Networks with Matrix-Valued Unknowns

no code implementations • 26 Jan 2020 • Parthe Pandit, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher

We consider the problem of inferring the input and hidden variables of a stochastic multi-layer neural network from an observation of the output.

Inference with Deep Generative Priors in High Dimensions

no code implementations • 8 Nov 2019 • Parthe Pandit, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher

This paper presents a novel algorithm, Multi-Layer Vector Approximate Message Passing (ML-VAMP), for inference in multi-layer stochastic neural networks.

Vocal Bursts Intensity Prediction

High-Dimensional Bernoulli Autoregressive Process with Long-Range Dependence

no code implementations • 19 Mar 2019 • Parthe Pandit, Mojtaba Sahraee-Ardakan, Arash A. Amini, Sundeep Rangan, Alyson K. Fletcher

We derive precise upper bounds on the mean-squared estimation error in terms of the number of samples, dimensions of the process, the lag $p$ and other key statistical properties of the model.
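
As a point of reference, one generic Bernoulli autoregressive formulation with lag $p$ (an illustrative form, not necessarily the paper's exact model) is $\Pr(X_{i,t} = 1 \mid X_{t-1}, \ldots, X_{t-p}) = \sigma\big(b_i + \sum_{k=1}^{p} \sum_{j} A^{(k)}_{ij} X_{j,t-k}\big)$, where $\sigma$ is a link function such as the logistic sigmoid and the matrices $A^{(k)}$ collect the lagged interaction coefficients being estimated.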

Gaussian Processes, Vocal Bursts Intensity Prediction

Asymptotics of MAP Inference in Deep Networks

no code implementations • 1 Mar 2019 • Parthe Pandit, Mojtaba Sahraee, Sundeep Rangan, Alyson K. Fletcher

Deep generative priors are a powerful tool for reconstruction problems with complex data such as images and text.
