no code implementations • 29 Jan 2024 • Giovanni S. Alberti, Luca Ratti, Matteo Santacesaria, Silvia Sciutto
In inverse problems, it is widely recognized that incorporating a sparsity prior regularizes the solution.
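A minimal sketch of this idea, assuming a linear forward operator `A` and an l1 (sparsity) penalty, is the classical ISTA iteration; all names below are illustrative, not code from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1: shrinks entries toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=200):
    """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 with ISTA.

    The l1 term is the sparsity prior: it drives small coefficients of the
    reconstruction exactly to zero, which is the regularization effect.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy example: recover a sparse vector from noisy underdetermined data.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista(A, y, lam=0.1)
```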
1 code implementation • 27 Mar 2023 • Giovanni S. Alberti, Johannes Hertrich, Matteo Santacesaria, Silvia Sciutto
Representing a manifold of very high-dimensional data with generative models has proven computationally efficient in practice.
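One common way to exploit such a model (a hedged sketch of the general technique, not this paper's method) is to search the low-dimensional latent space so that the generator output matches a given data point; the generator `G` below is an untrained stand-in.

```python
import torch

# Stand-in generator: maps a low-dimensional latent code to high-dimensional
# data. In practice this would be a trained VAE/GAN decoder.
latent_dim, data_dim = 8, 1024
G = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, data_dim),
)

def project_to_manifold(x, n_steps=500, lr=1e-2):
    """Approximate the projection of x onto the range of G by gradient
    descent over the latent code z. Optimizing in latent_dim dimensions
    instead of data_dim is where the computational savings come from."""
    z = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        loss = torch.sum((G(z) - x) ** 2)  # distance to the generator's range
        loss.backward()
        opt.step()
    return G(z).detach(), z.detach()

x = torch.randn(data_dim)                  # some high-dimensional point
x_proj, z_hat = project_to_manifold(x)
```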
no code implementations • 10 Jun 2022 • Rima Alaifari, Giovanni S. Alberti, Tandri Gauksson
As interest in deep neural networks (DNNs) for image reconstruction tasks grows, their reliability has been called into question (Antun et al., 2020; Gottschling et al., 2020).
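The instability phenomenon studied in this line of work can be probed with a simple experiment: search for a tiny measurement perturbation that maximally changes a network's reconstruction. This is a generic sketch under assumed names; `net` is a placeholder, not the paper's model.

```python
import torch

# Placeholder reconstruction network (measurements -> image); in the works
# cited above this would be a trained deep reconstructor.
m_dim, n_dim = 64, 256
net = torch.nn.Sequential(torch.nn.Linear(m_dim, 128), torch.nn.ReLU(),
                          torch.nn.Linear(128, n_dim))

def worst_case_perturbation(y, eps=1e-2, n_steps=100, step=1e-3):
    """Gradient ascent on ||net(y + delta) - net(y)||^2 over perturbations
    with ||delta||_2 <= eps. A large output change for a tiny eps signals
    an unstable reconstruction map."""
    base = net(y).detach()
    delta = (1e-3 * torch.randn_like(y)).requires_grad_(True)
    for _ in range(n_steps):
        change = torch.sum((net(y + delta) - base) ** 2)
        grad, = torch.autograd.grad(change, delta)
        with torch.no_grad():
            delta += step * grad                       # ascent step
            n = delta.norm()
            if n > eps:
                delta *= eps / n                       # project onto the eps-ball
    return delta.detach()

y = torch.randn(m_dim)
delta = worst_case_perturbation(y)
blow_up = (net(y + delta) - net(y)).norm() / delta.norm()  # instability ratio
```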
1 code implementation • 29 May 2022 • Giovanni S. Alberti, Matteo Santacesaria, Silvia Sciutto
In this work, we present and study Continuous Generative Neural Networks (CGNNs), namely, generative models in the continuous setting: the output of a CGNN belongs to an infinite-dimensional function space.
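A toy way to see how a generator can output an element of a function space (a simplified illustration, not the CGNN architecture of the paper) is to let the network produce coefficients in a fixed basis: the generated object is then a function on [0, 1] that can be evaluated at arbitrary points, independently of any pixel grid.

```python
import numpy as np

n_coeff, latent_dim = 16, 4
rng = np.random.default_rng(0)
W = rng.standard_normal((n_coeff, latent_dim))   # stand-in for trained weights

def generate_function(z):
    """Map a latent code z to a *function* on [0, 1]: the "network" (here a
    single nonlinear layer) outputs a coefficient vector in a sine basis, so
    the output lives in an infinite-dimensional function space."""
    c = np.tanh(W @ z)
    def f(t):
        basis = np.sin(np.outer(t, np.arange(1, n_coeff + 1) * np.pi))
        return basis @ c
    return f

f = generate_function(rng.standard_normal(latent_dim))
f(0.3)                                           # evaluate at a single point
f(np.linspace(0, 1, 1000))                       # or at any resolution
```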
1 code implementation • NeurIPS 2021 • Giovanni S. Alberti, Ernesto de Vito, Matti Lassas, Luca Ratti, Matteo Santacesaria
Then, we consider the problem of learning the regularizer from a finite training set in two different frameworks: one supervised, based on samples of both $x$ and $y$, and one unsupervised, based only on samples of $x$.
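A minimal numerical sketch of the unsupervised case, under assumptions made here for illustration (a known linear forward map, Gaussian data, and a quadratic Tikhonov-type regularizer): estimate the mean and covariance of $x$ from the training samples and use them in the reconstruction.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, N = 20, 15, 500
A = rng.standard_normal((m, n))                   # known linear forward operator

# Unsupervised training data: samples of x only (correlated Gaussians here).
B = rng.standard_normal((n, n))
C_true = B @ B.T / n
mu_true = rng.standard_normal(n)
X = rng.multivariate_normal(mu_true, C_true, size=N)

# "Learn" the regularizer: empirical mean and covariance of x.
mu = X.mean(axis=0)
C = np.cov(X, rowvar=False) + 1e-6 * np.eye(n)    # small jitter for inversion

def reconstruct(y, sigma2=1e-2):
    """Tikhonov-type estimator with the learned quadratic regularizer
    (x - mu)^T C^{-1} (x - mu): solves
    (A^T A / sigma2 + C^{-1}) x = A^T y / sigma2 + C^{-1} mu,
    the MAP estimate under the learned Gaussian prior."""
    M = A.T @ A / sigma2 + np.linalg.inv(C)
    rhs = A.T @ y / sigma2 + np.linalg.solve(C, mu)
    return np.linalg.solve(M, rhs)

x = rng.multivariate_normal(mu_true, C_true)
y = A @ x + np.sqrt(1e-2) * rng.standard_normal(m)
x_hat = reconstruct(y)
```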
2 code implementations • ICLR 2019 • Rima Alaifari, Giovanni S. Alberti, Tandri Gauksson
While deep neural networks have proven to be a powerful tool for many recognition and classification tasks, their stability properties are still not well understood.
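A standard way to probe these stability properties is the fast gradient sign method (FGSM); note this is a generic swapped-in technique for illustration, not the deformation-based construction this paper develops, and the classifier below is an untrained placeholder.

```python
import torch

# Toy classifier; in practice this would be a trained network.
net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
loss_fn = torch.nn.CrossEntropyLoss()

def fgsm(x, label, eps=0.1):
    """Fast gradient sign method: a one-step l_inf perturbation that
    increases the classification loss. If the prediction flips for a
    visually negligible eps, the network is unstable at x."""
    x = x.clone().requires_grad_(True)
    loss = loss_fn(net(x), label)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

x = torch.rand(1, 1, 28, 28)                     # placeholder "image"
label = torch.tensor([3])
x_adv = fgsm(x, label)
pred_before = net(x).argmax(dim=1)
pred_after = net(x_adv).argmax(dim=1)            # may differ even for small eps
```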