Search Results for author: Ben Adcock

Found 15 papers, 7 papers with code

Learning smooth functions in high dimensions: from sparse polynomials to deep neural networks

no code implementations 4 Apr 2024 Ben Adcock, Simone Brugiapaglia, Nick Dexter, Sebastian Moraga

For the latter, there is currently a significant gap between the approximation theory of DNNs and the practical performance of deep learning.

Uncertainty Quantification

A unified framework for learning with nonlinear model classes from arbitrary linear samples

no code implementations 25 Nov 2023 Ben Adcock, Juan M. Cardenas, Nick Dexter

In summary, our work not only introduces a unified way to study learning unknown objects from general types of data, but also establishes a series of general theoretical guarantees which consolidate and improve various known results.

Active Learning, Generalization Bounds

CS4ML: A general framework for active learning with arbitrary data based on Christoffel functions

no code implementations NeurIPS 2023 Ben Adcock, Juan M. Cardenas, Nick Dexter

Our framework extends the standard setup by allowing for general types of data, rather than merely pointwise samples of the target function.

Active Learning

Restarts subject to approximate sharpness: A parameter-free and optimal scheme for first-order methods

no code implementations 5 Jan 2023 Ben Adcock, Matthew J. Colbrook, Maksym Neyra-Nesterenko

However, sharpness involves problem-specific constants that are typically unknown, and previous restart schemes reduce convergence rates.
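
To illustrate the restart mechanism at a schematic level (this is not the paper's parameter-free scheme, which additionally searches over the unknown approximate-sharpness constants), here is a minimal Python sketch; the inner solver, the test problem, and all parameters are hypothetical.

```python
# Schematic restart loop (illustrative only): repeatedly run a first-order
# method for a fixed inner budget, keeping a restart only if it improves
# the objective. The paper's scheme derives these budgets from sharpness
# constants; here they are simply fixed by hand.
import numpy as np

def gradient_descent(x0, grad, step, iters):
    """Plain gradient descent, standing in for any first-order method."""
    x = x0.copy()
    for _ in range(iters):
        x = x - step * grad(x)
    return x

def restarted(x0, f, grad, step, inner_iters=50, restarts=10):
    best = x0
    for _ in range(restarts):
        cand = gradient_descent(best, grad, step, inner_iters)
        if f(cand) < f(best):          # accept a restart only if it helps
            best = cand
    return best

# Hypothetical test problem: a smoothed l1-norm objective.
f = lambda x: np.sum(np.sqrt(x**2 + 1e-6))
grad = lambda x: x / np.sqrt(x**2 + 1e-6)
x = restarted(np.ones(5), f, grad, step=0.1)
print("objective after restarts:", f(x))
```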

CAS4DL: Christoffel Adaptive Sampling for function approximation via Deep Learning

1 code implementation 25 Aug 2022 Ben Adcock, Juan M. Cardenas, Nick Dexter

In this work, we propose an adaptive sampling strategy, CAS4DL (Christoffel Adaptive Sampling for Deep Learning) to increase the sample efficiency of DL for multivariate function approximation.

Uncertainty Quantification
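
A minimal sketch of the Christoffel-sampling idea behind CAS4DL, shown here for a fixed polynomial space rather than the span of a trained network's hidden-layer basis as in the paper; the target function and all parameters are illustrative, and this is not the authors' implementation.

```python
# Illustrative Christoffel sampling for weighted least squares in a fixed
# linear space (CAS4DL applies this idea adaptively, to the space spanned
# by the network's last hidden layer, between training stages).
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)
n, m = 10, 60
grid = np.linspace(-1.0, 1.0, 5000)      # fine candidate grid on [-1, 1]

# Orthonormalize the basis over the grid, then form the reciprocal
# Christoffel function k(x) = sum_j phi_j(x)^2.
V = legendre.legvander(grid, n - 1)
Q, _ = np.linalg.qr(V)
k = np.sum(Q**2, axis=1)
p = k / k.sum()                          # sampling density proportional to k

idx = rng.choice(grid.size, size=m, p=p) # draw m samples from that density
w = 1.0 / np.sqrt(k[idx])                # weights compensate for the density

f = lambda x: np.exp(-x**2)              # hypothetical target function
A = w[:, None] * V[idx]                  # weighted design matrix
coef, *_ = np.linalg.lstsq(A, w * f(grid[idx]), rcond=None)
print("fitted coefficients:", np.round(coef, 3))
```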

Monte Carlo is a good sampling strategy for polynomial approximation in high dimensions

no code implementations 18 Aug 2022 Ben Adcock, Simone Brugiapaglia

We show that there is a least-squares approximation based on $m$ Monte Carlo samples whose error decays algebraically fast in $m/\log(m)$, with a rate that is the same as that of the best $n$-term polynomial approximation.

Uncertainty Quantification
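
A minimal one-dimensional illustration of this setup: a least-squares fit in a fixed $n$-dimensional polynomial space from $m$ uniform Monte Carlo samples. The paper's results concern the high-dimensional regime; the target function and parameters below are hypothetical.

```python
# Illustrative Monte Carlo least-squares polynomial approximation in 1-D.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)

f = lambda x: 1.0 / (1.0 + 8.0 * x**2)  # hypothetical smooth target on [-1, 1]

n = 20    # dimension of the polynomial space (an n-term approximation)
m = 200   # number of Monte Carlo samples, m > n

x = rng.uniform(-1.0, 1.0, m)           # uniform Monte Carlo sample points
A = legendre.legvander(x, n - 1)        # m x n Legendre design matrix
coef, *_ = np.linalg.lstsq(A, f(x), rcond=None)

x_test = np.linspace(-1.0, 1.0, 1000)
err = np.max(np.abs(legendre.legval(x_test, coef) - f(x_test)))
print(f"sup-norm error with m = {m} samples: {err:.2e}")
```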

On efficient algorithms for computing near-best polynomial approximations to high-dimensional, Hilbert-valued functions from limited samples

no code implementations 25 Mar 2022 Ben Adcock, Simone Brugiapaglia, Nick Dexter, Sebastian Moraga

On the one hand, there is a well-developed theory of best $s$-term polynomial approximation, which asserts exponential or algebraic rates of convergence for holomorphic functions.

Uncertainty Quantification
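
For context, the best $s$-term rates referred to here typically take the form $\|f - f_s\| \leq C e^{-\gamma s^{1/d}}$ (exponential, for holomorphic functions of $d$ variables) or $\|f - f_s\| \leq C s^{1-1/p}$ with $0 < p < 1$ (algebraic, in the infinite-dimensional setting), where $f_s$ is a best $s$-term polynomial approximation; the precise norms, constants, and function classes are as defined in the paper.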

NESTANets: Stable, accurate and efficient neural networks for analysis-sparse inverse problems

1 code implementation 2 Mar 2022 Maksym Neyra-Nesterenko, Ben Adcock

With the advent of deep learning, deep neural networks have significant potential to outperform existing state-of-the-art, model-based methods for solving inverse problems.

Rolling Shutter Correction

Deep Neural Networks Are Effective At Learning High-Dimensional Hilbert-Valued Functions From Limited Data

no code implementations 11 Dec 2020 Ben Adcock, Simone Brugiapaglia, Nick Dexter, Sebastian Moraga

Such problems are challenging: 1) pointwise samples are expensive to acquire, 2) the function domain is high dimensional, and 3) the range lies in a Hilbert space.

The gap between theory and practice in function approximation with deep neural networks

1 code implementation 16 Jan 2020 Ben Adcock, Nick Dexter

Our main conclusion from these experiments is that there is a crucial gap between the approximation theory of DNNs and their practical performance, with trained DNNs performing relatively poorly on functions for which there are strong approximation results (e.g., smooth functions), yet performing well in comparison to best-in-class methods for other functions.

Computational Efficiency, Decision Making

The troublesome kernel -- On hallucinations, no free lunches and the accuracy-stability trade-off in inverse problems

1 code implementation 5 Jan 2020 Nina M. Gottschling, Vegard Antun, Anders C. Hansen, Ben Adcock

In inverse problems in imaging, the focus of this paper, there is increasing empirical evidence that methods may suffer from hallucinations, i.e., false but realistic-looking artifacts; instability, i.e., sensitivity to perturbations in the data; and unpredictable generalization, i.e., excellent performance on some images, but significant deterioration on others.

Hallucination, Image Classification

Do log factors matter? On optimal wavelet approximation and the foundations of compressed sensing

no code implementations 24 May 2019 Ben Adcock, Simone Brugiapaglia, Matthew King-Roskamp

A signature result in compressed sensing is that Gaussian random sampling achieves stable and robust recovery of sparse vectors under optimal conditions on the number of measurements.

Image Reconstruction, Information Theory
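
A minimal sketch of the recovery problem referenced here: $\ell^1$-regularized least squares (solved with plain ISTA) from Gaussian random measurements. The dimensions, sparsity level, and regularization parameter below are illustrative choices, not values from the paper.

```python
# Illustrative sparse recovery from Gaussian measurements via ISTA on
#   min_x  0.5 * ||A x - y||_2^2 + lam * ||x||_1 .
import numpy as np

rng = np.random.default_rng(2)
N, m, s = 200, 80, 5                     # ambient dim, measurements, sparsity

x_true = np.zeros(N)
x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)

A = rng.standard_normal((m, N)) / np.sqrt(m)   # normalized Gaussian sampling
y = A @ x_true

lam = 0.01
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
x = np.zeros(N)
for _ in range(2000):                    # ISTA iterations
    z = x - (A.T @ (A @ x - y)) / L      # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print("relative recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```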

Convolutional Analysis Operator Learning: Dependence on Training Data

3 code implementations 21 Feb 2019 Il Yong Chun, David Hong, Ben Adcock, Jeffrey A. Fessler

Convolutional analysis operator learning (CAOL) enables the unsupervised training of (hierarchical) convolutional sparsifying operators or autoencoders from large datasets.

Open-Ended Question Answering, Operator Learning

On instabilities of deep learning in image reconstruction - Does AI come at a cost?

1 code implementation 14 Feb 2019 Vegard Antun, Francesco Renna, Clarice Poon, Ben Adcock, Anders C. Hansen

Deep learning, due to its unprecedented success in tasks such as image classification, has emerged as a new tool in image reconstruction with potential to change the field.

Image Classification, Image Reconstruction

On oracle-type local recovery guarantees in compressed sensing

1 code implementation 11 Jun 2018 Ben Adcock, Claire Boyer, Simone Brugiapaglia

We present improved sampling complexity bounds for stable and robust sparse recovery in compressed sensing.

Information Theory
