Search Results for author: Ernesto de Vito

Found 15 papers, 4 papers with code

Neural reproducing kernel Banach spaces and representer theorems for deep networks

no code implementations • 13 Mar 2024 • Francesca Bartolucci, Ernesto de Vito, Lorenzo Rosasco, Stefano Vigogna

Studying the function spaces defined by neural networks helps to understand the corresponding learning models and their inductive bias.

Inductive Bias

Efficient Numerical Integration in Reproducing Kernel Hilbert Spaces via Leverage Scores Sampling

1 code implementation • 22 Nov 2023 • Antoine Chatalic, Nicolas Schreuder, Ernesto de Vito, Lorenzo Rosasco

In this work we consider the problem of numerical integration, i.e., approximating integrals with respect to a target probability measure using only pointwise evaluations of the integrand.

Numerical Integration
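The setting described in the abstract can be illustrated with plain Monte Carlo quadrature, a minimal baseline that uses only pointwise evaluations of the integrand (not the leverage-scores estimator the paper develops; all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_integrate(f, sampler, n=10_000):
    """Plain Monte Carlo quadrature: average n pointwise
    evaluations of f at samples drawn from the target measure."""
    x = sampler(n)
    return np.mean(f(x))

# Example: E[x^2] under the standard normal equals 1.
est = mc_integrate(lambda x: x**2, lambda n: rng.standard_normal(n))
```

The paper's contribution is to replace i.i.d. sampling with a non-uniform distribution driven by kernel leverage scores, which yields better rates than the 1/sqrt(n) Monte Carlo error above.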

Regularized ERM on random subspaces

no code implementations • 4 Dec 2022 • Andrea Della Vecchia, Ernesto de Vito, Lorenzo Rosasco

We study a natural extension of classical empirical risk minimization, where the hypothesis space is a random subspace of a given space.

Computational Efficiency
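A common instance of "random subspace" ERM is Nyström regression: the hypothesis space is the span of kernel functions at m randomly chosen training points. A minimal sketch under that interpretation (a toy example, not the paper's exact estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy data: y = sin(x) + noise
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

# Random subspace: span of kernel functions at m sampled points (Nystrom)
m, lam = 20, 1e-3
idx = rng.choice(len(X), size=m, replace=False)
K_nm = gauss_kernel(X, X[idx])        # n x m cross-kernel
K_mm = gauss_kernel(X[idx], X[idx])   # m x m subspace kernel

# Regularized ERM restricted to the subspace:
# min ||K_nm a - y||^2 + lam * n * a^T K_mm a
alpha = np.linalg.solve(K_nm.T @ K_nm + lam * len(X) * K_mm,
                        K_nm.T @ y)

def predict(Xt):
    return gauss_kernel(Xt, X[idx]) @ alpha

mse = np.mean((predict(X) - y) ** 2)
```

Solving an m x m system instead of an n x n one is the source of the computational gain; the paper analyzes when this sketch loses no statistical accuracy.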

Efficient Hyperparameter Tuning for Large Scale Kernel Ridge Regression

1 code implementation • 17 Jan 2022 • Giacomo Meanti, Luigi Carratino, Ernesto de Vito, Lorenzo Rosasco

Our analysis shows the benefit of the proposed approach, which we incorporate into a library for large-scale kernel methods to obtain adaptively tuned solutions.

regression
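For context, the baseline the paper improves on is tuning the kernel ridge regression regularization parameter by a validation sweep. A minimal sketch of that baseline (grid search on a hold-out set; not the paper's gradient-based method):

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X = rng.uniform(-3, 3, size=(150, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(150)
Xtr, ytr, Xva, yva = X[:100], y[:100], X[100:], y[100:]

Ktr = gauss_kernel(Xtr, Xtr)
Kva = gauss_kernel(Xva, Xtr)

def val_error(lam):
    # KRR solution on the training split for a given lambda
    alpha = np.linalg.solve(Ktr + lam * len(Xtr) * np.eye(len(Xtr)), ytr)
    return np.mean((Kva @ alpha - yva) ** 2)

lams = np.logspace(-6, 0, 13)
errs = [val_error(l) for l in lams]
best_lam = lams[int(np.argmin(errs))]
```

Each grid point costs a full O(n^3) solve, which is what makes adaptive tuning attractive at large scale.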

Mean Nyström Embeddings for Adaptive Compressive Learning

1 code implementation • 21 Oct 2021 • Antoine Chatalic, Luigi Carratino, Ernesto de Vito, Lorenzo Rosasco

Compressive learning is an approach to efficient large scale learning based on sketching an entire dataset to a single mean embedding (the sketch), i.e., a vector of generalized moments.
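The "sketch as a vector of generalized moments" idea can be made concrete with random Fourier features: average a fixed feature map over the whole dataset, so two samples from the same distribution produce nearly identical sketches. A minimal sketch of this (illustrative; the paper's contribution is using Nyström features instead of random ones):

```python
import numpy as np

rng = np.random.default_rng(0)

def sketch(X, W):
    """Mean embedding sketch: average the random Fourier features
    phi(x) = [cos(Wx), sin(Wx)] over the entire dataset."""
    Z = X @ W.T
    feats = np.concatenate([np.cos(Z), np.sin(Z)], axis=1)
    return feats.mean(axis=0)

d, m = 2, 50
W = rng.standard_normal((m, d))       # fixed random frequencies
X1 = rng.standard_normal((5000, d))   # two independent samples
X2 = rng.standard_normal((5000, d))   # from the same distribution
s1, s2 = sketch(X1, W), sketch(X2, W)
dist = np.linalg.norm(s1 - s2)
```

The sketch has size 2m regardless of the dataset size, and downstream learning then operates on the sketch alone.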

Understanding neural networks with reproducing kernel Banach spaces

no code implementations • 20 Sep 2021 • Francesca Bartolucci, Ernesto de Vito, Lorenzo Rosasco, Stefano Vigogna

Characterizing the function spaces corresponding to neural networks can provide a way to understand their properties.

Learning the optimal Tikhonov regularizer for inverse problems

1 code implementation • NeurIPS 2021 • Giovanni S. Alberti, Ernesto de Vito, Matti Lassas, Luca Ratti, Matteo Santacesaria

Then, we consider the problem of learning the regularizer from a finite training set in two different frameworks: one supervised, based on samples of both $x$ and $y$, and one unsupervised, based only on samples of $x$.

Deblurring, Denoising +1
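As background for this entry, classical Tikhonov regularization for a linear inverse problem $y = Ax + \varepsilon$ has the closed form below; the paper's question is how to learn the regularizer $B$ from data rather than fixing it by hand. A minimal sketch with a hand-picked $B$ (the identity), purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def tikhonov(A, y, B, lam):
    """Closed-form Tikhonov solution of
    argmin_x ||A x - y||^2 + lam * ||B x||^2."""
    return np.linalg.solve(A.T @ A + lam * B.T @ B, A.T @ y)

n = 50
A = rng.standard_normal((n, n)) / np.sqrt(n)   # noisy forward map
x_true = rng.standard_normal(n)
y = A @ x_true + 0.01 * rng.standard_normal(n)

B = np.eye(n)                                   # classical choice
x_hat = tikhonov(A, y, B, lam=1e-3)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

In the paper's supervised framework, pairs $(x, y)$ are used to choose the regularizer; in the unsupervised one, only samples of $x$ are available.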

Interpolation and Learning with Scale Dependent Kernels

no code implementations • 17 Jun 2020 • Nicolò Pagliana, Alessandro Rudi, Ernesto De Vito, Lorenzo Rosasco

We study the learning properties of nonparametric ridge-less least squares.
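"Ridge-less" least squares corresponds to taking the regularization parameter to zero, so the kernel estimator interpolates the training data exactly. A minimal sketch of that interpolating estimator (a toy illustration of the object studied, not the paper's analysis):

```python
import numpy as np

def gauss_kernel(A, B, gamma=2.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X = np.linspace(-2, 2, 15).reshape(-1, 1)
y = np.sin(3 * X[:, 0])

# Ridgeless limit: solve K alpha = y with no regularization,
# so predictions at training points reproduce y exactly.
K = gauss_kernel(X, X)
alpha = np.linalg.solve(K, y)
train_resid = np.max(np.abs(K @ alpha - y))
```

The paper studies when such interpolating solutions still generalize, with the kernel's scale (here `gamma`) chosen as a function of the sample size.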

Regularized ERM on random subspaces

no code implementations • 17 Jun 2020 • Andrea Della Vecchia, Jaouad Mourtada, Ernesto de Vito, Lorenzo Rosasco

We study a natural extension of classical empirical risk minimization, where the hypothesis space is a random subspace of a given space.

Computational Efficiency

Multi-Scale Vector Quantization with Reconstruction Trees

no code implementations • 8 Jul 2019 • Enrico Cecini, Ernesto de Vito, Lorenzo Rosasco

Our main technical contribution is an analysis of the expected distortion achieved by the proposed algorithm, when the data are assumed to be sampled from a fixed unknown distribution.

Quantization
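The distortion quantity analyzed here is the expected squared distance from a sample to its nearest codeword. A minimal sketch computing the empirical version for a plain nearest-centroid quantizer (random codebooks for illustration, not the paper's reconstruction-tree construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def distortion(X, C):
    """Empirical distortion: mean squared distance from each
    point in X to its nearest codeword in C."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

X = rng.standard_normal((2000, 2))
# Codebooks of increasing size, drawn from the data distribution
d_small = distortion(X, rng.standard_normal((4, 2)))
d_large = distortion(X, rng.standard_normal((64, 2)))
```

Larger codebooks reduce distortion; the paper bounds the expected distortion of its multi-scale algorithm as a function of the codebook size and the unknown sampling distribution.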

Reproducing kernel Hilbert spaces on manifolds: Sobolev and Diffusion spaces

no code implementations • 27 May 2019 • Ernesto De Vito, Nicole Mücke, Lorenzo Rosasco

We study reproducing kernel Hilbert spaces (RKHS) on a Riemannian manifold.

Scale Invariant Interest Points with Shearlets

no code implementations • 26 Jul 2016 • Miguel A. Duval-Poo, Nicoletta Noceti, Francesca Odone, Ernesto de Vito

We derive a measure which is very effective for blob detection and closely related to the Laplacian of Gaussian.
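The Laplacian of Gaussian (LoG) reference the abstract mentions responds strongly at blob centers when the filter scale matches the blob scale. A minimal numpy-only sketch of LoG blob detection on a synthetic image (the classical baseline, not the shearlet-based measure the paper derives):

```python
import numpy as np

def log_kernel(size, sigma):
    """Discrete scale-normalized Laplacian-of-Gaussian filter."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    g = np.exp(-r2 / (2 * sigma**2))
    log = (r2 / sigma**2 - 2) * g   # proportional to sigma^2 * LoG
    return log - log.mean()          # zero mean: flat regions give 0

# Synthetic image: one bright Gaussian blob centered at (40, 40)
n, sigma = 80, 5.0
ax = np.arange(n)
xx, yy = np.meshgrid(ax, ax)
img = np.exp(-((xx - 40) ** 2 + (yy - 40) ** 2) / (2 * sigma**2))

# Convolve via FFT (circular convolution; fine for a centered blob)
ksz = 31
k = np.zeros((n, n))
k[:ksz, :ksz] = log_kernel(ksz, sigma)
k = np.roll(k, (-(ksz // 2), -(ksz // 2)), axis=(0, 1))
resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k)))

# The blob center is where |response| is largest
peak = np.unravel_index(np.argmax(np.abs(resp)), resp.shape)
```

The paper replaces the isotropic Gaussian derivatives with shearlet coefficients, gaining robustness to anisotropy while keeping this blob-center extremal behavior.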

Learning Sets with Separating Kernels

no code implementations • 16 Apr 2012 • Ernesto De Vito, Lorenzo Rosasco, Alessandro Toigo

We consider the problem of learning a set from random samples.
