no code implementations • 4 Sep 2024 • Ben Adcock
We introduce the Christoffel function as a key quantity in the analysis of (weighted) least-squares approximation from random samples, then show how it can be used to construct sampling strategies that possess near-optimal sample complexity: namely, the number of samples scales log-linearly in $n$, the dimension of the approximation space.
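A minimal sketch of this kind of sampling strategy, assuming an orthonormal Legendre basis on $[-1,1]$ and rejection sampling from the normalized Christoffel density; the helper names, target function, degree $n$ and sample budget $m$ below are illustrative choices, not taken from the paper:

```python
# Sketch: Christoffel-weighted least squares with Legendre polynomials on [-1, 1].
# Illustrative only; basis, target function and sample budget are assumptions.
import numpy as np
from numpy.polynomial import legendre

def orthonormal_legendre(x, n):
    # First n Legendre polynomials, orthonormal w.r.t. the uniform
    # probability measure dx/2 on [-1, 1].
    return legendre.legvander(x, n - 1) * np.sqrt(2 * np.arange(n) + 1)

def christoffel_sample(n, m, rng, batch=10000):
    # Rejection sampling from the density K_n(x)/n w.r.t. dx/2, where
    # K_n(x) = sum_j phi_j(x)^2; for Legendre, K_n(x)/n <= n gives the envelope.
    out = []
    while len(out) < m:
        x = rng.uniform(-1.0, 1.0, size=batch)
        K = np.sum(orthonormal_legendre(x, n) ** 2, axis=1)
        out.extend(x[rng.uniform(0.0, n, size=batch) < K / n])
    return np.array(out[:m])

def weighted_lsq(f, n, m, rng):
    # Weighted least-squares fit with weights w(x) = n / K_n(x).
    x = christoffel_sample(n, m, rng)
    Phi = orthonormal_legendre(x, n)
    w = np.sqrt(n / np.sum(Phi ** 2, axis=1))
    coeffs, *_ = np.linalg.lstsq(Phi * w[:, None], w * f(x), rcond=None)
    return coeffs

rng = np.random.default_rng(0)
f = lambda x: 1.0 / (1.0 + 25.0 * x ** 2)   # Runge function as an example target
n = 20
m = int(2 * n * np.log(n))                  # log-linear sample budget
c = weighted_lsq(f, n, m, rng)
```

Here the weights compensate for the nonuniform sampling density, which is the mechanism behind the log-linear sample complexity mentioned above.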
no code implementations • 5 Aug 2024 • Alexander Bastounis, Paolo Campodonico, Mihaela van der Schaar, Ben Adcock, Anders C. Hansen
The paradox is that there exists an AI that does not reason consistently (and therefore cannot be on the level of human intelligence) yet is correct on the same set of problems.
no code implementations • 20 Jun 2024 • Ben Adcock, Nick Dexter, Sebastian Moraga
We first identify a family of DNNs such that the resulting Deep Learning (DL) procedure achieves optimal generalization bounds for such operators.
no code implementations • 4 Apr 2024 • Ben Adcock, Simone Brugiapaglia, Nick Dexter, Sebastian Moraga
For the latter, there is currently a significant gap between the approximation theory of DNNs and the practical performance of deep learning.
no code implementations • 25 Nov 2023 • Ben Adcock, Juan M. Cardenas, Nick Dexter
In summary, our work not only introduces a unified way to study learning unknown objects from general types of data, but also establishes a series of general theoretical guarantees which consolidate and improve various known results.
no code implementations • NeurIPS 2023 • Ben Adcock, Juan M. Cardenas, Nick Dexter
Our framework extends the standard setup by allowing for general types of data, rather than merely pointwise samples of the target function.
1 code implementation • 5 Jan 2023 • Ben Adcock, Matthew J. Colbrook, Maksym Neyra-Nesterenko
However, sharpness involves problem-specific constants that are typically unknown, and restart schemes typically reduce convergence rates.
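As generic background (this is not the paper's parameter-free scheme), a restart scheme simply re-launches an accelerated first-order method from its previous output, discarding the accumulated momentum; the objective, step size and restart schedule below are illustrative assumptions:

```python
# Generic restart wrapper around Nesterov's accelerated gradient method.
# Illustrative only; not the scheme proposed in the paper.
import numpy as np

def accelerated_gd(grad, x0, step, iters):
    # Nesterov's method; the momentum sequence is rebuilt from scratch each call.
    x, y, t = x0, x0, 1.0
    for _ in range(iters):
        x_new = y - step * grad(y)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t ** 2))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

def restarted(grad, x0, step, inner_iters=100, restarts=10):
    # Each restart warm-starts from the previous output; choosing a good
    # schedule is where the (typically unknown) sharpness constants enter.
    x = x0
    for _ in range(restarts):
        x = accelerated_gd(grad, x, step, inner_iters)
    return x

# Example objective: f(x) = 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
grad = lambda x: A.T @ (A @ x - b)
x_hat = restarted(grad, np.zeros(20), step=1.0 / np.linalg.norm(A, 2) ** 2)
```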
1 code implementation • 25 Aug 2022 • Ben Adcock, Juan M. Cardenas, Nick Dexter
In this work, we propose an adaptive sampling strategy, CAS4DL (Christoffel Adaptive Sampling for Deep Learning), to increase the sample efficiency of DL for multivariate function approximation.
no code implementations • 18 Aug 2022 • Ben Adcock, Simone Brugiapaglia
We show that there is a least-squares approximation based on $m$ Monte Carlo samples whose error decays algebraically fast in $m/\log(m)$, with a rate that is the same as that of the best $n$-term polynomial approximation.
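To read this rate concretely (with notation assumed here, not taken verbatim from the paper): if the best $n$-term polynomial approximation error decays like $n^{-\gamma}$ for some $\gamma > 0$, then the least-squares approximation built from $m$ Monte Carlo samples admits an error bound of the form $(m/\log(m))^{-\gamma}$, up to constants.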
no code implementations • 25 Mar 2022 • Ben Adcock, Simone Brugiapaglia, Nick Dexter, Sebastian Moraga
On the one hand, there is a well-developed theory of best $s$-term polynomial approximation, which asserts exponential or algebraic rates of convergence for holomorphic functions.
1 code implementation • 2 Mar 2022 • Maksym Neyra-Nesterenko, Ben Adcock
With the advent of deep learning, deep neural networks have significant potential to outperform existing state-of-the-art, model-based methods for solving inverse problems.
no code implementations • 11 Dec 2020 • Ben Adcock, Simone Brugiapaglia, Nick Dexter, Sebastian Moraga
Such problems are challenging: 1) pointwise samples are expensive to acquire, 2) the function domain is high dimensional, and 3) the range lies in a Hilbert space.
1 code implementation • 16 Jan 2020 • Ben Adcock, Nick Dexter
Our main conclusion from these experiments is that there is a crucial gap between the approximation theory of DNNs and their practical performance, with trained DNNs performing relatively poorly on functions for which there are strong approximation results (e.g., smooth functions), yet performing well in comparison to best-in-class methods for other functions.
1 code implementation • 5 Jan 2020 • Nina M. Gottschling, Vegard Antun, Anders C. Hansen, Ben Adcock
In inverse problems in imaging, the focus of this paper, there is increasing empirical evidence that methods may suffer from hallucinations, i.e., false, but realistic-looking artifacts; instability, i.e., sensitivity to perturbations in the data; and unpredictable generalization, i.e., excellent performance on some images, but significant deterioration on others.
no code implementations • 24 May 2019 • Ben Adcock, Simone Brugiapaglia, Matthew King-Roskamp
A signature result in compressed sensing is that Gaussian random sampling achieves stable and robust recovery of sparse vectors under optimal conditions on the number of measurements.
Image Reconstruction • Information Theory
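A minimal sketch of this setting (not the paper's code): recovering a sparse vector from Gaussian random measurements via iterative soft thresholding applied to the LASSO problem; the dimensions and regularization parameter are arbitrary choices:

```python
# Sketch: sparse recovery from Gaussian measurements via ISTA for the LASSO.
# Illustrative only; problem sizes and lam are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, m, s = 256, 80, 8                          # ambient dim, measurements, sparsity
x_true = np.zeros(N)
x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)

A = rng.standard_normal((m, N)) / np.sqrt(m)   # Gaussian sampling matrix
y = A @ x_true                                 # noiseless measurements

def ista(A, y, lam=1e-3, iters=5000):
    # Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient descent.
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - (A.T @ (A @ x - y)) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

x_rec = ista(A, y)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```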
3 code implementations • 21 Feb 2019 • Il Yong Chun, David Hong, Ben Adcock, Jeffrey A. Fessler
Convolutional analysis operator learning (CAOL) enables the unsupervised training of (hierarchical) convolutional sparsifying operators or autoencoders from large datasets.
1 code implementation • 14 Feb 2019 • Vegard Antun, Francesco Renna, Clarice Poon, Ben Adcock, Anders C. Hansen
Deep learning, due to its unprecedented success in tasks such as image classification, has emerged as a new tool in image reconstruction with potential to change the field.
1 code implementation • 11 Jun 2018 • Ben Adcock, Claire Boyer, Simone Brugiapaglia
We present improved sampling complexity bounds for stable and robust sparse recovery in compressed sensing.
Information Theory