Search Results for author: Robert D. Nowak

Found 22 papers, 2 papers with code

Weighted variation spaces and approximation by shallow ReLU networks

no code implementations · 28 Jul 2023 · Ronald DeVore, Robert D. Nowak, Rahul Parhi, Jonathan W. Siegel

A new and more appropriate definition of model classes on domains is given by introducing the concept of weighted variation spaces.

Variation Spaces for Multi-Output Neural Networks: Insights on Multi-Task Learning and Network Compression

1 code implementation · 25 May 2023 · Joseph Shenouda, Rahul Parhi, Kangwook Lee, Robert D. Nowak

This representer theorem establishes that shallow vector-valued neural networks are the solutions to data-fitting problems over these infinite-dimensional spaces, where the network widths are bounded by the square of the number of training data.

Multi-Task Learning · Neural Network Compression
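
As an illustration of what such a representer theorem asserts (the notation here is assumed for exposition, not quoted from the paper), the solutions are shallow vector-valued ReLU networks of the form

$$f(x) = \sum_{k=1}^{K} v_k \, \sigma(w_k^\top x - b_k) + Ax + c, \qquad \sigma(t) = \max\{0, t\},$$

with width $K$ at most $N^2$ for $N$ training pairs.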

Filtered Iterative Denoising for Linear Inverse Problems

no code implementations · 15 Feb 2023 · Danica Fliss, Willem Marais, Robert D. Nowak

These are often referred to as "plug-and-play" (PnP) methods because, in principle, an off-the-shelf denoiser can be used for a variety of different inverse problems.

Denoising
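
A minimal sketch of the generic PnP iteration the abstract alludes to, assuming a linear forward model $y = Ax + \text{noise}$ and an arbitrary off-the-shelf denoiser (this is the general PnP template, not the paper's filtered variant):

```python
import numpy as np

def pnp(A, y, denoise, step=1e-2, n_iters=100):
    """Generic plug-and-play (PnP) iteration for a linear inverse
    problem y = A x + noise: alternate a gradient step on the
    least-squares data-fidelity term with an off-the-shelf denoiser.
    `denoise` is any map from signals to signals."""
    x = A.T @ y  # crude initialization from the measurements
    for _ in range(n_iters):
        x = x - step * A.T @ (A @ x - y)  # data-fidelity gradient step
        x = denoise(x)                    # plug in the denoiser
    return x
```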

Deep Learning Meets Sparse Regularization: A Signal Processing Perspective

no code implementations · 23 Jan 2023 · Rahul Parhi, Robert D. Nowak

Deep learning has been wildly successful in practice, and most state-of-the-art machine learning methods are based on neural networks.

PathProx: A Proximal Gradient Algorithm for Weight Decay Regularized Deep Neural Networks

no code implementations · 6 Oct 2022 · Liu Yang, Jifan Zhang, Joseph Shenouda, Dimitris Papailiopoulos, Kangwook Lee, Robert D. Nowak

Weight decay is one of the most widely used forms of regularization in deep learning, and has been shown to improve generalization and robustness.
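
For contrast with PathProx, here is the baseline the abstract refers to: weight decay implemented as the usual squared-$\ell_2$ penalty inside a gradient step (a minimal sketch; the paper's proximal algorithm itself is not reproduced here):

```python
import numpy as np

def sgd_step_with_weight_decay(w, grad, lr=1e-2, wd=1e-4):
    """One SGD step with weight decay: the penalty (wd/2) * ||w||^2
    contributes wd * w to the gradient of the regularized loss."""
    return w - lr * (grad + wd * w)
```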

Near-Minimax Optimal Estimation With Shallow ReLU Neural Networks

no code implementations · 18 Sep 2021 · Rahul Parhi, Robert D. Nowak

We study the problem of estimating an unknown function from noisy data using shallow ReLU neural networks.
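
The setting is standard nonparametric regression (the symbols below are generic, not quoted from the paper): one observes

$$y_i = f(x_i) + \epsilon_i, \qquad i = 1, \dots, n,$$

with $f$ unknown and $\epsilon_i$ noise, and forms an estimate $\hat{f}$ by fitting a shallow ReLU network to the pairs $(x_i, y_i)$.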

What Kinds of Functions do Deep Neural Networks Learn? Insights from Variational Spline Theory

no code implementations · 7 May 2021 · Rahul Parhi, Robert D. Nowak

The function space consists of compositions of functions from the Banach spaces of second-order bounded variation in the Radon domain.

Banach Space Representer Theorems for Neural Networks and Ridge Splines

no code implementations · 10 Jun 2020 · Rahul Parhi, Robert D. Nowak

We derive a representer theorem showing that finite-width, single-hidden layer neural networks are solutions to these inverse problems.

Optimal Confidence Regions for the Multinomial Parameter

no code implementations · 3 Feb 2020 · Matthew L. Malloy, Ardhendu Tripathy, Robert D. Nowak

More precisely, consider an empirical distribution $\widehat{\boldsymbol{p}}$ generated from $n$ iid realizations of a random variable that takes one of $k$ possible values according to an unknown distribution $\boldsymbol{p}$.

Decision Making
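
A minimal sketch of the object the abstract starts from, the empirical distribution $\widehat{\boldsymbol{p}}$ (the confidence-region construction itself is the paper's contribution and is not sketched here):

```python
import numpy as np

def empirical_distribution(samples, k):
    """Empirical distribution p_hat over k categories, computed from
    n iid draws taking values in {0, ..., k-1}."""
    return np.bincount(samples, minlength=k) / len(samples)

# Example with an assumed true p over k = 3 values and n = 1000 draws.
rng = np.random.default_rng(0)
samples = rng.choice(3, size=1000, p=[0.5, 0.3, 0.2])
print(empirical_distribution(samples, k=3))  # close to (0.5, 0.3, 0.2)
```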

The Role of Neural Network Activation Functions

no code implementations · 5 Oct 2019 · Rahul Parhi, Robert D. Nowak

A wide variety of activation functions have been proposed for neural networks.

MaxiMin Active Learning in Overparameterized Model Classes

no code implementations · 29 May 2019 · Mina Karzand, Robert D. Nowak

Generating labeled training datasets has become a major bottleneck in Machine Learning (ML) pipelines.

Active Learning · BIG-bench Machine Learning · +1

Tensor Methods for Nonlinear Matrix Completion

no code implementations · 26 Apr 2018 · Greg Ongie, Daniel Pimentel-Alarcón, Laura Balzano, Rebecca Willett, Robert D. Nowak

This approach will succeed in many cases where traditional LRMC is guaranteed to fail because the data are low-rank in the tensorized representation but not in the original representation.

Low-Rank Matrix Completion

Algebraic Variety Models for High-Rank Matrix Completion

1 code implementation · ICML 2017 · Greg Ongie, Rebecca Willett, Robert D. Nowak, Laura Balzano

We consider a generalization of low-rank matrix completion to the case where the data belong to an algebraic variety, i.e., each data point is a solution to a system of polynomial equations.

Clustering · Low-Rank Matrix Completion · +1
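
A small illustration of the lifting idea behind this paper and the tensor-methods paper above, assuming a degree-2 monomial lift (a sketch of the phenomenon, not the papers' completion algorithm):

```python
import numpy as np

def monomial_lift(X):
    """Lift each column of X (one data point per column) to all
    monomials of degree <= 2: a constant, the coordinates, and all
    pairwise products. Points on an algebraic variety make this
    lifted matrix rank-deficient even when X itself is full-rank."""
    d, n = X.shape
    rows = [np.ones(n)]                                           # degree 0
    rows += [X[i] for i in range(d)]                              # degree 1
    rows += [X[i] * X[j] for i in range(d) for j in range(i, d)]  # degree 2
    return np.vstack(rows)

# Points on the unit circle satisfy x^2 + y^2 - 1 = 0, so the 6 lifted
# rows are linearly dependent and the lifted rank drops to 5.
theta = np.linspace(0.0, 2 * np.pi, 50, endpoint=False)
X = np.vstack([np.cos(theta), np.sin(theta)])
print(np.linalg.matrix_rank(monomial_lift(X)))  # 5
```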

A Characterization of Deterministic Sampling Patterns for Low-Rank Matrix Completion

no code implementations · 9 Mar 2015 · Daniel L. Pimentel-Alarcón, Nigel Boston, Robert D. Nowak

Finite completability is the tipping point in LRMC, as a few additional samples of a finitely completable matrix guarantee its unique completability.

Low-Rank Matrix Completion

Deterministic Conditions for Subspace Identifiability from Incomplete Sampling

no code implementations · 2 Oct 2014 · Daniel L. Pimentel-Alarcón, Robert D. Nowak, Nigel Boston

Consider a generic $r$-dimensional subspace of $\mathbb{R}^d$, $r<d$, and suppose that we are only given projections of this subspace onto small subsets of the canonical coordinates.
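
A minimal sketch of the observation model in the abstract, with assumed dimensions (the paper's identifiability conditions are not reproduced here):

```python
import numpy as np

# A generic r-dimensional subspace of R^d, observed only through its
# projections onto small subsets of the canonical coordinates.
rng = np.random.default_rng(0)
d, r = 6, 2
U = rng.standard_normal((d, r))   # basis matrix; its span is generic
omega = [0, 2, 5]                 # one observed coordinate subset
U_omega = U[omega, :]             # the data: rows of U on that subset
print(U_omega.shape)              # (3, 2)
```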

Sparse Estimation with Strongly Correlated Variables using Ordered Weighted L1 Regularization

no code implementations · 14 Sep 2014 · Mario A. T. Figueiredo, Robert D. Nowak

This paper studies ordered weighted L1 (OWL) norm regularization for sparse estimation problems with strongly correlated variables.

Clustering
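
A hedged sketch of the OWL norm itself (the definition as commonly stated; the weights below are an assumed example):

```python
import numpy as np

def owl_norm(x, w):
    """Ordered weighted L1 (OWL) norm: sort |x| in nonincreasing order
    and take the inner product with a nonincreasing weight vector w.
    Equal weights recover the L1 norm; w = (1, 0, ..., 0) recovers the
    L-infinity norm."""
    return np.sort(np.abs(x))[::-1] @ np.asarray(w, dtype=float)

print(owl_norm([0.5, -2.0, 1.0], [3.0, 2.0, 1.0]))  # 3*2 + 2*1 + 1*0.5 = 8.5
```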

Active Learning for Undirected Graphical Model Selection

no code implementations · 13 Apr 2014 · Divyanshu Vats, Robert D. Nowak, Richard G. Baraniuk

This paper studies graphical model selection, i.e., the problem of estimating a graph of statistical relationships among a collection of random variables.

Active Learning · Model Selection

Near-Optimal Adaptive Compressed Sensing

no code implementations · 26 Jun 2013 · Matthew L. Malloy, Robert D. Nowak

The algorithm, termed Compressive Adaptive Sense and Search (CASS), is shown to be near-optimal in that it succeeds at the lowest possible signal-to-noise ratio (SNR) levels, improving on previous work in adaptive compressed sensing.

The Sample Complexity of Search over Multiple Populations

no code implementations · 6 Sep 2012 · Matthew L. Malloy, Gongguo Tang, Robert D. Nowak

We consider a large number of populations, each corresponding to either distribution $P_0$ or $P_1$.

Active Ranking using Pairwise Comparisons

no code implementations · NeurIPS 2011 · Kevin G. Jamieson, Robert D. Nowak

We show that under this assumption the number of possible rankings grows like $n^{2d}$ and demonstrate an algorithm that can identify a randomly selected ranking using just slightly more than $d \log n$ adaptively selected pairwise comparisons, on average.
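
As a toy illustration of why adaptively selected comparisons help (this is binary-search insertion, an analogy for the gains from adaptivity, not the paper's $d$-dimensional algorithm):

```python
def adaptive_rank(items, below):
    """Rank items using adaptively chosen pairwise comparisons:
    each new item is placed by binary search, costing O(log n)
    comparisons instead of comparing against every ranked item.
    `below(a, b)` returns True if a should rank below b."""
    ranked = []
    for item in items:
        lo, hi = 0, len(ranked)
        while lo < hi:
            mid = (lo + hi) // 2
            if below(item, ranked[mid]):
                hi = mid
            else:
                lo = mid + 1
        ranked.insert(lo, item)
    return ranked

print(adaptive_rank([3, 1, 4, 1, 5], lambda a, b: a < b))  # [1, 1, 3, 4, 5]
```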

The Geometry of Generalized Binary Search

no code implementations · 22 Oct 2009 · Robert D. Nowak

Generalized binary search (GBS) is a well-known greedy algorithm for determining a binary-valued function through a sequence of strategically selected queries.
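
A minimal sketch of the classic greedy rule in GBS (hypotheses are modeled as functions mapping queries to $\pm 1$; the paper's geometric analysis of when this succeeds is not reproduced here):

```python
def gbs_select(hypotheses, queries):
    """Greedy generalized-binary-search step: choose the query whose
    two possible answers split the viable hypotheses most evenly,
    i.e., minimize |sum of h(q) over viable h|, so that either
    response eliminates roughly half of the hypotheses."""
    return min(queries, key=lambda q: abs(sum(h(q) for h in hypotheses)))

# Example: threshold functions on {0, ..., 4} as hypotheses.
hypotheses = [lambda q, t=t: 1 if q >= t else -1 for t in range(1, 5)]
print(gbs_select(hypotheses, queries=range(5)))  # 2 splits the hypotheses 2/2
```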
