Search Results for author: Massimo Fornasier

Found 14 papers, 4 papers with code

Approximation Theory, Computing, and Deep Learning on the Wasserstein Space

1 code implementation • 30 Oct 2023 • Massimo Fornasier, Pascal Heid, Giacomo Enrico Sodini

In this study, we delve into the challenging problem of the numerical approximation of Sobolev-smooth functions defined on probability spaces.

From NeurODEs to AutoencODEs: a mean-field control framework for width-varying Neural Networks

no code implementations • 5 Jul 2023 • Cristina Cipriani, Massimo Fornasier, Alessandro Scagliotti

The connection between Residual Neural Networks (ResNets) and continuous-time control systems (known as NeurODEs) has led to a mathematical analysis of neural networks which has provided interesting results of both theoretical and practical significance.

Gradient is All You Need?

2 code implementations • 16 Jun 2023 • Konstantin Riedl, Timo Klock, Carina Geldhauser, Massimo Fornasier

The fundamental value of such a link between CBO and SGD lies in the fact that CBO is provably globally convergent to global minimizers for ample classes of nonsmooth and nonconvex objective functions, hence, on the one hand, offering a novel explanation for the success of stochastic relaxations of gradient descent.
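The CBO dynamics underlying this link can be sketched in a few lines of Python; the function name `cbo_minimize`, the parameter values, and the test objective below are illustrative choices, not taken from the paper (this is the anisotropic-noise variant of the scheme):

```python
import numpy as np

def cbo_minimize(f, dim, n_particles=100, n_steps=2000,
                 lam=1.0, sigma=0.7, alpha=50.0, dt=0.01, seed=0):
    """Drift particles toward a Gibbs-weighted consensus point,
    perturbed by consensus-scaled (anisotropic) noise."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-3.0, 3.0, size=(n_particles, dim))
    for _ in range(n_steps):
        fx = f(x)                                # objective value per particle
        w = np.exp(-alpha * (fx - fx.min()))     # stabilized Gibbs weights
        consensus = (w[:, None] * x).sum(axis=0) / w.sum()
        diff = x - consensus
        # Euler-Maruyama step of dX = -lam (X - c) dt + sigma |X - c| dW
        x += -lam * diff * dt \
             + sigma * np.abs(diff) * np.sqrt(dt) * rng.standard_normal(x.shape)
    return consensus
```

On a smooth toy objective such as $f(x) = \|x - x^*\|^2$ the consensus point lands near $x^*$; the provable global convergence discussed above concerns far more general nonsmooth, nonconvex objectives.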

Finite Sample Identification of Wide Shallow Neural Networks with Biases

no code implementations • 8 Nov 2022 • Massimo Fornasier, Timo Klock, Marco Mondelli, Michael Rauchensteiner

Artificial neural networks are functions depending on a finite number of parameters typically encoded as weights and biases.

Data-driven entropic spatially inhomogeneous evolutionary games

no code implementations • 9 Mar 2021 • Mauro Bonafini, Massimo Fornasier, Bernhard Schmitzer

We prove convergence of minimizing solutions obtained from a finite number of observations to a mean field limit and the minimal value provides a quantitative error bound on the data-driven evolutions.

Optimization and Control

Stable Recovery of Entangled Weights: Towards Robust Identification of Deep Neural Networks from Minimal Samples

no code implementations • 18 Jan 2021 • Christian Fiedler, Massimo Fornasier, Timo Klock, Michael Rauchensteiner

In this paper we approach the problem of unique and stable identifiability of generic deep artificial neural networks with pyramidal shape and smooth activation functions from a finite number of input-output samples.

Consensus-Based Optimization on Hypersurfaces: Well-Posedness and Mean-Field Limit

no code implementations • 31 Jan 2020 • Massimo Fornasier, Hui Huang, Lorenzo Pareschi, Philippe Sünnen

We introduce a new stochastic differential model for global optimization of nonconvex functions on compact hypersurfaces.

Consensus-Based Optimization on the Sphere: Convergence to Global Minimizers and Machine Learning

1 code implementation • 31 Jan 2020 • Massimo Fornasier, Hui Huang, Lorenzo Pareschi, Philippe Sünnen

To quantify the performance of the new approach, we show that the algorithm performs essentially as well as ad hoc state-of-the-art methods on challenging problems in signal processing and machine learning, namely phase retrieval and robust subspace detection.
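A minimal sketch of consensus dynamics constrained to the sphere, assuming a simple renormalization step in place of the paper's projected stochastic scheme; the function name and all parameter values are illustrative:

```python
import numpy as np

def cbo_sphere(f, dim, n_particles=200, n_steps=3000,
               lam=1.0, sigma=0.3, alpha=100.0, dt=0.05, seed=1):
    """CBO step followed by renormalization onto the unit sphere."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_particles, dim))
    x /= np.linalg.norm(x, axis=1, keepdims=True)      # start on the sphere
    for _ in range(n_steps):
        fx = f(x)
        w = np.exp(-alpha * (fx - fx.min()))           # Gibbs weights
        v = (w[:, None] * x).sum(axis=0) / w.sum()     # consensus point
        diff = x - v
        x += -lam * diff * dt \
             + sigma * np.abs(diff) * np.sqrt(dt) * rng.standard_normal(x.shape)
        x /= np.linalg.norm(x, axis=1, keepdims=True)  # project back
    return v / np.linalg.norm(v)
```

Minimizing a linear objective such as $f(x) = -x_1$ over the sphere drives the consensus point toward the first coordinate axis, the global minimizer.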

Data-driven Evolutions of Critical Points

no code implementations • 1 Nov 2019 • Stefano Almi, Massimo Fornasier, Richard Huber

In this paper we are concerned with the learnability of energies from data obtained by observing time evolutions of their critical points starting at random initial equilibria.

Robust and Resource Efficient Identification of Two Hidden Layer Neural Networks

no code implementations • 30 Jun 2019 • Massimo Fornasier, Timo Klock, Michael Rauchensteiner

Gathering several approximate Hessians allows one to reliably approximate the matrix subspace $\mathcal W$ spanned by the symmetric tensors $a_1 \otimes a_1, \dots, a_{m_0} \otimes a_{m_0}$ formed by the weights of the first layer, together with the entangled symmetric tensors $v_1 \otimes v_1, \dots, v_{m_1} \otimes v_{m_1}$ formed by suitable combinations of the weights of the first and second layers as $v_\ell = A G_0 b_\ell / \|A G_0 b_\ell\|_2$, $\ell \in [m_1]$, for a diagonal matrix $G_0$ depending on the activation functions of the first layer.
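The Hessian-span mechanism can be illustrated on a one-hidden-layer toy network (the paper's setting has two hidden layers and entangled weights; the weights and dimensions below are made up). For $f(x) = \sum_k v_k \tanh(a_k \cdot x)$, every Hessian of $f$ lies in $\mathrm{span}\{a_k \otimes a_k\}$, so an SVD of many sampled Hessians recovers that span:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 6, 3                                    # input dim, hidden units
A = rng.standard_normal((m, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)  # unit first-layer weights a_k
v = rng.standard_normal(m)                     # second-layer weights

def hessian(x):
    """Exact Hessian of f(x) = sum_k v_k tanh(a_k . x)."""
    t = A @ x
    s2 = -2 * np.tanh(t) * (1 - np.tanh(t) ** 2)   # tanh''(t)
    return (v * s2)[:, None, None] * A[:, :, None] * A[:, None, :]

# Stack 50 vectorized Hessians at random inputs and take an SVD.
H = np.array([hessian(rng.standard_normal(d)).sum(axis=0).ravel()
              for _ in range(50)])
_, _, Vt = np.linalg.svd(H, full_matrices=False)
W = Vt[:m]                                     # orthonormal basis of the span
```

Each rank-one tensor $a_k \otimes a_k$ then lies (up to machine precision) in the subspace spanned by the rows of `W`.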

Spatially Inhomogeneous Evolutionary Games

1 code implementation • 10 May 2018 • Luigi Ambrosio, Massimo Fornasier, Marco Morandotti, Giuseppe Savaré

We introduce and study a mean-field model for a system of spatially distributed players interacting through an evolutionary game driven by a replicator dynamics.

Optimization and Control, Dynamical Systems, Functional Analysis (MSC: 91A22, 37C10, 47J35, 58D25, 35Q91)
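The replicator dynamics driving the model can be sketched as follows; the payoff matrix is an arbitrary two-strategy example, without the spatial coupling the paper adds:

```python
import numpy as np

def replicator_step(p, payoff, dt=0.01):
    """One explicit Euler step of dp_i/dt = p_i ((Fp)_i - p.(Fp))."""
    fitness = payoff @ p
    avg = p @ fitness
    p = p + dt * p * (fitness - avg)
    return p / p.sum()            # renormalize against discretization drift

# Anti-coordination example: the interior equilibrium is p = (2/3, 1/3).
payoff = np.array([[0.0, 2.0],
                   [1.0, 0.0]])
p = np.array([0.5, 0.5])
for _ in range(5000):
    p = replicator_step(p, payoff)
```

For this payoff matrix strategy 1 earns $2p_2$ and strategy 2 earns $p_1$, so the fitnesses balance at $p_1 = 2/3$, which the iteration approaches while staying on the probability simplex.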

Robust and Resource Efficient Identification of Shallow Neural Networks by Fewest Samples

no code implementations • 4 Apr 2018 • Massimo Fornasier, Jan Vybíral, Ingrid Daubechies

In the case of the shallowest feed-forward neural network, second order differentiation and tensors of order two (i.e., matrices) suffice, as we prove in this paper.

Robust Recovery of Low-Rank Matrices with Non-Orthogonal Sparse Decomposition from Incomplete Measurements

no code implementations • 18 Jan 2018 • Massimo Fornasier, Johannes Maly, Valeriya Naumova

By adapting the concept of the restricted isometry property from compressed sensing to our novel model class, we prove error bounds between global minimizers and the ground truth, up to noise level, from a number of subgaussian measurements scaling as $R(s_1+s_2)$, up to log-factors in the dimension and relative-to-diameter distortion.

Numerical Analysis

Learning Functions of Few Arbitrary Linear Parameters in High Dimensions

no code implementations • 18 Aug 2010 • Massimo Fornasier, Karin Schnass, Jan Vybiral

Under certain smoothness and variation assumptions on the function $g$, and an arbitrary choice of the matrix $A$, we present in this paper: 1. a sampling choice of the points $\{x_i\}$ drawn at random for each function approximation; 2. algorithms (Algorithm 1 and Algorithm 2) for computing the approximating function, whose complexity is at most polynomial in the dimension $d$ and in the number $m$ of points.
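The gradient-span idea behind recovering $f(x) = g(Ax)$ can be sketched as follows: every gradient $\nabla f(x) = A^T \nabla g(Ax)$ lies in the row space of $A$, so an SVD of sampled finite-difference gradients reveals it. The function `estimate_active_subspace` and the test function are illustrative stand-ins, not the paper's Algorithm 1 or 2:

```python
import numpy as np

def estimate_active_subspace(f, d, k, n_samples=200, eps=1e-5, seed=0):
    """Stack finite-difference gradients (each lies in row(A)) and
    take the top-k right singular vectors as a basis estimate."""
    rng = np.random.default_rng(seed)
    G = np.empty((n_samples, d))
    for i in range(n_samples):
        x = rng.standard_normal(d)
        fx = f(x)
        for j in range(d):                   # forward finite differences
            e = np.zeros(d)
            e[j] = eps
            G[i, j] = (f(x + e) - fx) / eps
    _, _, Vt = np.linalg.svd(G, full_matrices=False)
    return Vt[:k]                            # orthonormal (k, d) basis
```

For a ridge function with a two-dimensional active subspace, the rows of $A$ are recovered up to rotation within the estimated span, with error on the order of the finite-difference step.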
