no code implementations • 5 Oct 2023 • Rahul Parhi, Michael Unser
We investigate the function-space optimality (specifically, the Banach-space optimality) of a large class of shallow neural architectures with multivariate nonlinearities/activation functions.
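For concreteness, a minimal sketch of the architecture class in question, using assumed notation (K, v_k, W_k, b_k are illustrative names, not the paper's):

```latex
% Shallow network whose units apply a genuinely multivariate
% nonlinearity g : R^m -> R to a matrix-valued inner layer,
% rather than a scalar ridge activation:
\[
  f(\mathbf{x}) \;=\; \sum_{k=1}^{K} v_k \, g(\mathbf{W}_k \mathbf{x} - \mathbf{b}_k),
\]
% where each W_k is a matrix and b_k a vector, so the nonlinearity
% acts on several linear measurements of x at once.
```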
no code implementations • 28 Jul 2023 • Ronald DeVore, Robert D. Nowak, Rahul Parhi, Jonathan W. Siegel
We give a new and more suitable definition of model classes on domains by introducing the concept of weighted variation spaces.
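As a hedged sketch of what a weighted variation norm could look like (the paper's precise definition may differ; the weight w and measure \mu are assumed notation):

```latex
% For a shallow-network-type representation
%   f(x) = \int \sigma(\omega \cdot x - b) \, d\mu(\omega, b),
% a weighted variation norm takes the schematic form
\[
  \|f\|_{V_w} \;=\; \inf_{\mu} \int w(\omega, b) \, d|\mu|(\omega, b),
\]
% where the weight w >= 1 rescales the cost of each neuron, e.g.,
% according to how that neuron interacts with the domain.
```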
1 code implementation • 25 May 2023 • Joseph Shenouda, Rahul Parhi, Kangwook Lee, Robert D. Nowak
This representer theorem establishes that shallow vector-valued neural networks are solutions to data-fitting problems over these infinite-dimensional spaces, with network widths bounded by the square of the number of training data points.
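Schematically, and with symbols assumed purely for illustration, the theorem's conclusion has this shape:

```latex
% Given N training pairs (x_i, y_i) with y_i vector-valued, there is
% a solution that is a shallow vector-valued network
\[
  f(\mathbf{x}) \;=\; \sum_{k=1}^{K} \mathbf{v}_k \, \sigma(\mathbf{w}_k \cdot \mathbf{x} - b_k) + (\text{affine term}),
\]
% whose width obeys K <= N^2: the number of neurons never needs
% to exceed the square of the number of training points.
```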
no code implementations • 23 Jan 2023 • Rahul Parhi, Robert D. Nowak
Deep learning has been wildly successful in practice, and most state-of-the-art machine learning methods are based on neural networks.
no code implementations • 18 Sep 2021 • Rahul Parhi, Robert D. Nowak
We study the problem of estimating an unknown function from noisy data using shallow ReLU neural networks.
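A purely illustrative sketch of this setup (not the estimator analyzed in the paper; the width, optimizer, and regularization strength are assumptions):

```python
# Fit a shallow (single-hidden-layer) ReLU network to noisy samples
# of an unknown function. Illustrative only: the paper studies the
# statistical properties of such estimators, not this training recipe.
import torch

torch.manual_seed(0)

# Noisy observations y_i = f(x_i) + noise, with f unknown
# (a sine is used here purely as a stand-in).
n = 200
x = torch.rand(n, 1) * 2 - 1
y = torch.sin(3 * x) + 0.1 * torch.randn(n, 1)

width = 50  # assumed width, chosen for illustration
model = torch.nn.Sequential(
    torch.nn.Linear(1, width),
    torch.nn.ReLU(),
    torch.nn.Linear(width, 1),
)

# Weight decay acts as a simple stand-in for the sparsity-promoting
# regularization studied in this line of work.
opt = torch.optim.Adam(model.parameters(), lr=1e-2, weight_decay=1e-4)
for _ in range(2000):
    opt.zero_grad()
    loss = torch.mean((model(x) - y) ** 2)
    loss.backward()
    opt.step()

print(f"final training MSE: {loss.item():.4f}")
```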
no code implementations • 7 May 2021 • Rahul Parhi, Robert D. Nowak
The function space consists of compositions of functions from the Banach spaces of second-order bounded variation in the Radon domain.
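A hedged sketch of the kind of seminorm behind these spaces (constants and operator conventions are assumed; the paper's definitions are authoritative):

```latex
% Second-order bounded variation in the Radon domain is measured,
% schematically, by
\[
  \|f\|_{\mathrm{RBV}^2} \;=\; \big\| \partial_t^2 \Lambda^{d-1} \mathcal{R} f \big\|_{\mathcal{M}},
\]
% where \mathcal{R} is the Radon transform, \Lambda^{d-1} a ramp
% filter, \partial_t^2 the second derivative in the offset variable,
% and \|\cdot\|_{\mathcal{M}} the total-variation norm of a measure.
```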
no code implementations • 10 Jun 2020 • Rahul Parhi, Robert D. Nowak
We derive a representer theorem showing that finite-width, single-hidden layer neural networks are solutions to these inverse problems.
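Schematically, the conclusion is that some solution takes a finite single-hidden-layer form (names below are illustrative, not the paper's):

```latex
% A finite-width solution: a single-hidden-layer network plus a
% low-degree polynomial correction c(x) spanning the null space
% of the regularizer,
\[
  f(\mathbf{x}) \;=\; \sum_{k=1}^{K} v_k \, \sigma(\mathbf{w}_k \cdot \mathbf{x} - b_k) + c(\mathbf{x}),
\]
% with the width K bounded in terms of the number of data points.
```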
no code implementations • 5 Oct 2019 • Rahul Parhi, Robert D. Nowak
A wide variety of activation functions have been proposed for neural networks.