Search Results for author: Rahul Parhi

Found 8 papers, 1 paper with code

Function-Space Optimality of Neural Architectures With Multivariate Nonlinearities

no code implementations · 5 Oct 2023 · Rahul Parhi, Michael Unser

We investigate the function-space optimality (specifically, the Banach-space optimality) of a large class of shallow neural architectures with multivariate nonlinearities/activation functions.

Weighted variation spaces and approximation by shallow ReLU networks

no code implementations · 28 Jul 2023 · Ronald DeVore, Robert D. Nowak, Rahul Parhi, Jonathan W. Siegel

We give a new, more appropriate definition of model classes on domains by introducing the concept of weighted variation spaces.

Variation Spaces for Multi-Output Neural Networks: Insights on Multi-Task Learning and Network Compression

1 code implementation · 25 May 2023 · Joseph Shenouda, Rahul Parhi, Kangwook Lee, Robert D. Nowak

This representer theorem establishes that shallow vector-valued neural networks are the solutions to data-fitting problems over these infinite-dimensional spaces, where the network widths are bounded by the square of the number of training data.
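The width bound in this representer theorem can be made concrete with a small sketch. The snippet below is a hypothetical illustration (the architecture, dimensions, and random weights are assumptions, not the paper's construction): a shallow vector-valued ReLU network whose width is set to the square of the number of training samples, as the theorem permits.

```python
import numpy as np

rng = np.random.default_rng(0)

N, d_in, d_out = 5, 3, 2   # training samples, input dim, output dim
width = N ** 2             # width bound from the representer theorem: N^2 neurons

# Shallow vector-valued ReLU network: f(x) = V @ relu(W @ x + b)
W = rng.standard_normal((width, d_in))   # inner weights (hypothetical, random)
b = rng.standard_normal(width)           # biases
V = rng.standard_normal((d_out, width))  # outer (vector-valued) weights

def f(x):
    """Evaluate the shallow network at a single input x of dimension d_in."""
    return V @ np.maximum(W @ x + b, 0.0)

x = rng.standard_normal(d_in)
y = f(x)
print(y.shape)  # output is d_out-dimensional
```

The point of the sketch is only the accounting: a data-fitting problem with N samples never requires more than N² hidden neurons in this framework.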

Tags: Multi-Task Learning, Neural Network Compression

Deep Learning Meets Sparse Regularization: A Signal Processing Perspective

no code implementations · 23 Jan 2023 · Rahul Parhi, Robert D. Nowak

Deep learning has been wildly successful in practice and most state-of-the-art machine learning methods are based on neural networks.

Near-Minimax Optimal Estimation With Shallow ReLU Neural Networks

no code implementations · 18 Sep 2021 · Rahul Parhi, Robert D. Nowak

We study the problem of estimating an unknown function from noisy data using shallow ReLU neural networks.
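The estimation setup can be sketched in a few lines. This is a minimal illustration under assumptions of my own (a random hidden layer with outer weights fit by least squares, i.e. random-feature regression), not the paper's estimator: recover an unknown function from noisy samples with a shallow ReLU network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy samples of an unknown target function (here sin, for illustration)
n, width = 50, 30
x = np.linspace(-1, 1, n)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(n)

# Shallow ReLU network with a fixed random hidden layer
W = rng.standard_normal(width)
b = rng.standard_normal(width)
features = np.maximum(np.outer(x, W) + b, 0.0)   # (n, width) ReLU features

# Fit only the outer weights by least squares
v, *_ = np.linalg.lstsq(features, y, rcond=None)
y_hat = features @ v

# Error against the noiseless target
mse = float(np.mean((y_hat - np.sin(np.pi * x)) ** 2))
```

Fixing the hidden layer sidesteps the nonconvex training problem; it is a convenient stand-in for the trained shallow networks the estimation-theoretic results actually concern.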

What Kinds of Functions do Deep Neural Networks Learn? Insights from Variational Spline Theory

no code implementations · 7 May 2021 · Rahul Parhi, Robert D. Nowak

The function space consists of compositions of functions from the Banach spaces of second-order bounded variation in the Radon domain.

Banach Space Representer Theorems for Neural Networks and Ridge Splines

no code implementations · 10 Jun 2020 · Rahul Parhi, Robert D. Nowak

We derive a representer theorem showing that finite-width, single-hidden layer neural networks are solutions to these inverse problems.

The Role of Neural Network Activation Functions

no code implementations · 5 Oct 2019 · Rahul Parhi, Robert D. Nowak

A wide variety of activation functions have been proposed for neural networks.
