Search Results for author: Lorenzo Rosasco

Found 80 papers, 20 papers with code

Mean Nyström Embeddings for Adaptive Compressive Learning

no code implementations21 Oct 2021 Antoine Chatalic, Luigi Carratino, Ernesto de Vito, Lorenzo Rosasco

Compressive learning is an approach to efficient large-scale learning based on sketching an entire dataset into a single mean embedding (the sketch), i.e., a vector of generalized moments.
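
As a toy illustration of the sketching idea, assuming plain random Fourier moments as the feature map (an illustrative choice; the paper's contribution is replacing such random features with adaptive Nyström ones):

```python
import numpy as np

rng = np.random.default_rng(0)

def sketch(X, n_freq=64):
    # Map each sample to generalized moments exp(i <w_j, x>) and average:
    # the whole dataset collapses to one complex vector of size n_freq.
    d = X.shape[1]
    W = rng.standard_normal((n_freq, d))   # random frequencies (illustrative choice)
    Z = np.exp(1j * (X @ W.T))             # per-sample generalized moments
    return Z.mean(axis=0)                  # the sketch: a single mean embedding

X = rng.standard_normal((10_000, 5))
s = sketch(X)
print(s.shape)                             # sketch size is independent of n
```

Learning then operates on the fixed-size sketch `s` rather than on the full dataset.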

Understanding neural networks with reproducing kernel Banach spaces

no code implementations20 Sep 2021 Francesca Bartolucci, Ernesto de Vito, Lorenzo Rosasco, Stefano Vigogna

Characterizing the function spaces corresponding to neural networks can provide a way to understand their properties.

Ada-BKB: Scalable Gaussian Process Optimization on Continuous Domain by Adaptive Discretization

no code implementations16 Jun 2021 Marco Rando, Luigi Carratino, Silvia Villa, Lorenzo Rosasco

In this paper, we introduce Ada-BKB (Adaptive Budgeted Kernelized Bandit), a no-regret Gaussian process optimization algorithm for functions on continuous domains that provably runs in $O(T^2 d_\text{eff}^2)$, where $d_\text{eff}$ is the effective dimension of the explored space, typically much smaller than $T$.

Learning to predict target location with turbulent odor plumes

no code implementations16 Jun 2021 Nicola Rigolli, Nicodemo Magnoli, Lorenzo Rosasco, Agnese Seminara

Animal behavior and neural recordings show that the brain is able to measure both the intensity of an odor and the timing of odor encounters.

From inexact optimization to learning via gradient concentration

no code implementations9 Jun 2021 Bernhard Stankewitz, Nicole Mücke, Lorenzo Rosasco

Optimization was recently shown to control the inductive bias in a learning process, a property referred to as implicit, or iterative regularization.

Structured Prediction for CRiSP Inverse Kinematics Learning with Misspecified Robot Models

1 code implementation25 Feb 2021 Gian Maria Marconi, Raffaello Camoriano, Lorenzo Rosasco, Carlo Ciliberto

Among these, computing the inverse kinematics of a redundant robot arm poses a significant challenge due to the non-linear structure of the robot, the hard joint constraints and the non-invertible kinematics map.

Structured Prediction

Data-efficient Weakly-supervised Learning for On-line Object Detection under Domain Shift in Robotics

no code implementations28 Dec 2020 Elisa Maiettini, Raffaello Camoriano, Giulia Pasquale, Vadim Tikhanoff, Lorenzo Rosasco, Lorenzo Natale

These methods have important limitations for robotics: learning solely on off-line data may introduce biases (the so-called domain shift) and prevent adaptation to novel tasks.

Active Learning, Line Detection +2

Fast Object Segmentation Learning with Kernel-based Methods for Robotics

1 code implementation25 Nov 2020 Federico Ceola, Elisa Maiettini, Giulia Pasquale, Lorenzo Rosasco, Lorenzo Natale

Our approach is validated on the YCB-Video dataset, which is widely adopted in the computer vision and robotics communities, demonstrating that we can match and even surpass state-of-the-art performance with a significant reduction (${\sim}6\times$) in training time.

Semantic Segmentation

Decentralised Learning with Random Features and Distributed Gradient Descent

1 code implementation ICML 2020 Dominic Richards, Patrick Rebeschini, Lorenzo Rosasco

Under standard source and capacity assumptions, we establish high probability bounds on the predictive performance for each agent as a function of the step size, number of iterations, inverse spectral gap of the communication matrix and number of Random Features.

For interpolating kernel machines, minimizing the norm of the ERM solution minimizes stability

no code implementations28 Jun 2020 Akshay Rangamani, Lorenzo Rosasco, Tomaso Poggio

We study the average $\mbox{CV}_{loo}$ stability of kernel ridge-less regression and derive corresponding risk bounds.

Kernel methods through the roof: handling billions of points efficiently

1 code implementation NeurIPS 2020 Giacomo Meanti, Luigi Carratino, Lorenzo Rosasco, Alessandro Rudi

Kernel methods provide an elegant and principled approach to nonparametric learning, but so far could hardly be used in large scale problems, since naïve implementations scale poorly with data size.

Regularized ERM on random subspaces

no code implementations17 Jun 2020 Andrea Della Vecchia, Jaouad Mourtada, Ernesto de Vito, Lorenzo Rosasco

We study a natural extension of classical empirical risk minimization, where the hypothesis space is a random subspace of a given space.

Iterative regularization for convex regularizers

1 code implementation17 Jun 2020 Cesare Molinari, Mathurin Massias, Lorenzo Rosasco, Silvia Villa

We study iterative regularization for linear models, when the bias is convex but not necessarily strongly convex.

Interpolation and Learning with Scale Dependent Kernels

no code implementations17 Jun 2020 Nicolò Pagliana, Alessandro Rudi, Ernesto De Vito, Lorenzo Rosasco

We study the learning properties of nonparametric ridge-less least squares.

Asymptotics of Ridge (less) Regression under General Source Condition

no code implementations11 Jun 2020 Dominic Richards, Jaouad Mourtada, Lorenzo Rosasco

We analyze the prediction error of ridge regression in an asymptotic regime where the sample size and dimension go to infinity at a proportional rate.

Hyperbolic Manifold Regression

no code implementations28 May 2020 Gian Maria Marconi, Lorenzo Rosasco, Carlo Ciliberto

Geometric representation learning has recently shown great promise in several machine learning settings, ranging from relational learning to language processing and generative models.

Relational Reasoning, Representation Learning

Constructing fast approximate eigenspaces with application to the fast graph Fourier transforms

no code implementations22 Feb 2020 Cristian Rusu, Lorenzo Rosasco

We investigate numerically efficient approximations of eigenspaces associated to symmetric and general matrices.

Statistical and Computational Trade-Offs in Kernel K-Means

no code implementations NeurIPS 2018 Daniele Calandriello, Lorenzo Rosasco

We investigate the efficiency of k-means in terms of both statistical and computational requirements.

Fast approximation of orthogonal matrices and application to PCA

no code implementations18 Jul 2019 Cristian Rusu, Lorenzo Rosasco

We study the problem of approximating orthogonal matrices so that their application is numerically fast and yet accurate.

Gain with no Pain: Efficient Kernel-PCA by Nyström Sampling

no code implementations11 Jul 2019 Nicholas Sterge, Bharath Sriperumbudur, Lorenzo Rosasco, Alessandro Rudi

In this paper, we propose and study a Nyström-based approach to efficient large scale kernel principal component analysis (PCA).

Multi-Scale Vector Quantization with Reconstruction Trees

no code implementations8 Jul 2019 Enrico Cecini, Ernesto de Vito, Lorenzo Rosasco

Our main technical contribution is an analysis of the expected distortion achieved by the proposed algorithm, when the data are assumed to be sampled from a fixed unknown distribution.

Quantization

Implicit Regularization of Accelerated Methods in Hilbert Spaces

no code implementations NeurIPS 2019 Nicolò Pagliana, Lorenzo Rosasco

We study learning properties of accelerated gradient descent methods for linear least-squares in Hilbert spaces.

Reproducing kernel Hilbert spaces on manifolds: Sobolev and Diffusion spaces

no code implementations27 May 2019 Ernesto De Vito, Nicole Mücke, Lorenzo Rosasco

We study reproducing kernel Hilbert spaces (RKHS) on a Riemannian manifold.

Gaussian Process Optimization with Adaptive Sketching: Scalable and No Regret

1 code implementation13 Mar 2019 Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco

Moreover, we show that our procedure selects at most $\tilde{O}(d_{eff})$ points, where $d_{eff}$ is the effective dimension of the explored space, which is typically much smaller than both $d$ and $t$.

Gaussian Processes

Theory III: Dynamics and Generalization in Deep Networks

no code implementations12 Mar 2019 Andrzej Banburski, Qianli Liao, Brando Miranda, Lorenzo Rosasco, Fernanda De La Torre, Jack Hidary, Tomaso Poggio

In particular, gradient descent induces a dynamics of the normalized weights which converges for $t \to \infty$ to an equilibrium corresponding to a minimum norm (or maximum margin) solution.

Beating SGD Saturation with Tail-Averaging and Minibatching

no code implementations NeurIPS 2019 Nicole Mücke, Gergely Neu, Lorenzo Rosasco

While stochastic gradient descent (SGD) is one of the major workhorses in machine learning, the learning properties of many practically used variants are poorly understood.

On Fast Leverage Score Sampling and Optimal Learning

1 code implementation NeurIPS 2018 Alessandro Rudi, Daniele Calandriello, Luigi Carratino, Lorenzo Rosasco

Leverage score sampling provides an appealing way to perform approximate computations for large matrices.
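
A minimal sketch of exact ridge leverage scores and biased subsampling (toy Gaussian kernel; the paper's point is computing fast approximations that avoid the $O(n^3)$ inverse used here):

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_leverage_scores(K, lam):
    # Diagonal of K (K + lam * n * I)^{-1}: each entry measures how much
    # the corresponding point matters for a regularized kernel solve.
    n = K.shape[0]
    return np.diag(K @ np.linalg.inv(K + lam * n * np.eye(n)))

# Toy Gaussian kernel matrix on 50 random points
X = rng.standard_normal((50, 3))
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-sq / 2)

scores = ridge_leverage_scores(K, lam=1e-2)
probs = scores / scores.sum()
# Subsample 10 columns with probability proportional to their leverage
subset = rng.choice(50, size=10, replace=False, p=probs)
```

Sampling proportionally to the scores keeps the influential points that uniform subsampling would often miss.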

Learning with SGD and Random Features

no code implementations NeurIPS 2018 Luigi Carratino, Alessandro Rudi, Lorenzo Rosasco

Sketching and stochastic gradient methods are arguably the most common techniques to derive efficient large scale learning algorithms.

Manifold Structured Prediction

no code implementations NeurIPS 2018 Alessandro Rudi, Carlo Ciliberto, Gian Maria Marconi, Lorenzo Rosasco

Structured prediction provides a general framework to deal with supervised problems where the outputs have semantically rich structure.

Structured Prediction

Speeding-up Object Detection Training for Robotics with FALKON

no code implementations23 Mar 2018 Elisa Maiettini, Giulia Pasquale, Lorenzo Rosasco, Lorenzo Natale

We address the size and imbalance of the training data by exploiting the stochastic subsampling intrinsic to the method and a novel, fast bootstrapping approach.

Object Detection, Region Proposal

Iterate averaging as regularization for stochastic gradient descent

no code implementations22 Feb 2018 Gergely Neu, Lorenzo Rosasco

We propose and analyze a variant of the classic Polyak-Ruppert averaging scheme, broadly used in stochastic gradient methods.
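
The classic scheme can be sketched on a one-dimensional stochastic quadratic (a toy setup, not the paper's variant): run plain SGD, then report the average of the iterates instead of the last one.

```python
import numpy as np

rng = np.random.default_rng(0)

# SGD on E[(w - y)^2] / 2 with noisy targets y ~ N(w_star, 1)
w_star, w, eta, T = 2.0, 0.0, 0.1, 5000
iterates = []
for _ in range(T):
    y = w_star + rng.standard_normal()
    w -= eta * (w - y)            # stochastic gradient step
    iterates.append(w)

w_avg = np.mean(iterates)          # Polyak-Ruppert average of the iterates
# The last iterate keeps bouncing at a noise floor set by the step size;
# the running average smooths that noise away.
print(abs(iterates[-1] - w_star), abs(w_avg - w_star))
```

The averaged estimate typically lands far closer to the optimum than any single late iterate, which is the regularization effect the paper analyzes.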

Optimal Rates for Spectral Algorithms with Least-Squares Regression over Hilbert Spaces

no code implementations20 Jan 2018 Junhong Lin, Alessandro Rudi, Lorenzo Rosasco, Volkan Cevher

In this paper, we study regression problems over a separable Hilbert space with the square loss, covering non-parametric regression over a reproducing kernel Hilbert space.

Theory of Deep Learning III: explaining the non-overfitting puzzle

no code implementations30 Dec 2017 Tomaso Poggio, Kenji Kawaguchi, Qianli Liao, Brando Miranda, Lorenzo Rosasco, Xavier Boix, Jack Hidary, Hrushikesh Mhaskar

In this note, we show that the dynamics associated with gradient descent minimization of nonlinear networks is topologically equivalent, near the asymptotically stable minima of the empirical error, to a linear gradient system in a quadratic potential with a degenerate (for square loss) or almost degenerate (for logistic or cross-entropy loss) Hessian.

General Classification

Optimal Rates for Learning with Nyström Stochastic Gradient Methods

no code implementations21 Oct 2017 Junhong Lin, Lorenzo Rosasco

In the setting of nonparametric regression, we propose and study a combination of stochastic gradient methods with Nyström subsampling, allowing multiple passes over the data and mini-batches.

Are we done with object recognition? The iCub robot's perspective

1 code implementation28 Sep 2017 Giulia Pasquale, Carlo Ciliberto, Francesca Odone, Lorenzo Rosasco, Lorenzo Natale

We report on an extensive study of the benefits and limitations of current deep learning approaches to object recognition in robot vision scenarios, introducing a novel dataset used for our investigation.

Human robot interaction, Image Retrieval +2

Don't relax: early stopping for convex regularization

no code implementations18 Jul 2017 Simon Matet, Lorenzo Rosasco, Silvia Villa, Bang Long Vu

We consider the problem of designing efficient regularization algorithms when regularization is encoded by a (strongly) convex functional.

Solving $\ell^p\!$-norm regularization with tensor kernels

no code implementations18 Jul 2017 Saverio Salzo, Johan A. K. Suykens, Lorenzo Rosasco

In this paper, we discuss how a suitable family of tensor kernels can be used to efficiently solve nonparametric extensions of $\ell^p$ regularized learning methods.

Generalization Properties of Doubly Stochastic Learning Algorithms

no code implementations3 Jul 2017 Junhong Lin, Lorenzo Rosasco

In this paper, we provide an in-depth theoretical analysis for different variants of doubly stochastic learning algorithms within the setting of nonparametric regression in a reproducing kernel Hilbert space and considering the square loss.

FALKON: An Optimal Large Scale Kernel Method

4 code implementations NeurIPS 2017 Alessandro Rudi, Luigi Carratino, Lorenzo Rosasco

In this paper, we take a substantial step in scaling up kernel methods, proposing FALKON, a novel algorithm that can efficiently process millions of points.

Consistent Multitask Learning with Nonlinear Output Relations

no code implementations NeurIPS 2017 Carlo Ciliberto, Alessandro Rudi, Lorenzo Rosasco, Massimiliano Pontil

However, in practice assuming the tasks to be linearly related might be restrictive, and allowing for nonlinear structures is a challenge.

Structured Prediction

Convergence of the Forward-Backward Algorithm: Beyond the Worst Case with the Help of Geometry

no code implementations28 Mar 2017 Guillaume Garrigos, Lorenzo Rosasco, Silvia Villa

We provide a comprehensive study of the convergence of the forward-backward algorithm under suitable geometric conditions, such as conditioning or Łojasiewicz properties.

Optimal Learning for Multi-pass Stochastic Gradient Methods

no code implementations NeurIPS 2016 Junhong Lin, Lorenzo Rosasco

We analyze the learning properties of the stochastic gradient method when multiple passes over the data and mini-batches are allowed.

Why and When Can Deep -- but Not Shallow -- Networks Avoid the Curse of Dimensionality: a Review

no code implementations2 Nov 2016 Tomaso Poggio, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, Qianli Liao

The paper characterizes classes of functions for which deep learning can be exponentially better than shallow learning.

Optimal Rates for Multi-pass Stochastic Gradient Methods

no code implementations28 May 2016 Junhong Lin, Lorenzo Rosasco

As a byproduct, we derive optimal convergence results for batch gradient methods (even in the non-attainable cases).

Generalization Properties and Implicit Regularization for Multiple Passes SGM

1 code implementation26 May 2016 Junhong Lin, Raffaello Camoriano, Lorenzo Rosasco

We study the generalization properties of stochastic gradient methods for learning with convex loss functions and linearly parameterized functions.

Incremental Robot Learning of New Objects with Fixed Update Time

1 code implementation17 May 2016 Raffaello Camoriano, Giulia Pasquale, Carlo Ciliberto, Lorenzo Natale, Lorenzo Rosasco, Giorgio Metta

We consider object recognition in the context of lifelong learning, where a robotic agent learns to discriminate between a growing number of object classes as it accumulates experience about the environment.

Active Learning, Classification +2

Generalization Properties of Learning with Random Features

1 code implementation NeurIPS 2017 Alessandro Rudi, Lorenzo Rosasco

We study the generalization properties of ridge regression with random features in the statistical learning framework.

Incremental Semiparametric Inverse Dynamics Learning

no code implementations18 Jan 2016 Raffaello Camoriano, Silvio Traversaro, Lorenzo Rosasco, Giorgio Metta, Francesco Nori

This paper presents a novel approach for incremental semiparametric inverse dynamics learning.

NYTRO: When Subsampling Meets Early Stopping

1 code implementation19 Oct 2015 Tomas Angles, Raffaello Camoriano, Alessandro Rudi, Lorenzo Rosasco

Early stopping is a well known approach to reduce the time complexity for performing training and model selection of large scale learning machines.

Model Selection

Holographic Embeddings of Knowledge Graphs

3 code implementations16 Oct 2015 Maximilian Nickel, Lorenzo Rosasco, Tomaso Poggio

Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs.
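
The holographic composition at the core of the method is circular correlation, computable in $O(d \log d)$ via the FFT; a minimal sketch (function names are illustrative):

```python
import numpy as np

def circular_correlation(a, b):
    # (a * b)_k = sum_i a_i * b_{(i + k) mod d}, via the FFT identity
    # F(a * b) = conj(F(a)) . F(b)
    return np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)).real

def hole_score(e_s, e_o, r):
    # Score of triple (s, r, o): relation embedding dotted with the
    # circular correlation of the two entity embeddings.
    return r @ circular_correlation(e_s, e_o)

rng = np.random.default_rng(0)
e_s, e_o, r = (rng.standard_normal(8) for _ in range(3))
print(hole_score(e_s, e_o, r))
```

Unlike a full tensor product, the correlation compresses entity interactions into a vector of the same dimension, keeping memory linear in the embedding size.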

Knowledge Graphs, Link Prediction +1

Deep Convolutional Networks are Hierarchical Kernel Machines

no code implementations5 Aug 2015 Fabio Anselmi, Lorenzo Rosasco, Cheston Tan, Tomaso Poggio

In i-theory a typical layer of a hierarchical architecture consists of HW modules pooling the dot products of the inputs to the layer with the transformations of a few templates under a group.

Less is More: Nyström Computational Regularization

1 code implementation NeurIPS 2015 Alessandro Rudi, Raffaello Camoriano, Lorenzo Rosasco

We study Nyström type subsampling approaches to large scale kernel methods, and prove learning bounds in the statistical learning setting, where random sampling and high probability estimates are considered.
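
A minimal sketch of plain Nyström kernel ridge regression with uniformly sampled landmarks (toy data; the paper's analysis concerns how the number of landmarks trades computation for accuracy):

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss(A, B):
    # Gaussian kernel matrix between two sets of points
    return np.exp(-((A[:, None] - B[None]) ** 2).sum(-1) / 2)

# Toy regression data: noisy sine on [-3, 3]
X = rng.uniform(-3, 3, (500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)

# Restrict the solution to m << n randomly chosen landmark points
m, lam = 20, 1e-3
idx = rng.choice(len(X), m, replace=False)
Knm = gauss(X, X[idx])                  # n x m cross-kernel
Kmm = gauss(X[idx], X[idx])             # m x m landmark kernel
# Solve (Knm^T Knm + lam * n * Kmm) a = Knm^T y: an m x m system, not n x n
a = np.linalg.solve(Knm.T @ Knm + lam * len(X) * Kmm, Knm.T @ y)
pred = Knm @ a
mse = np.mean((pred - y) ** 2)
```

The "less is more" message is that a small random subset of landmarks can preserve the statistical accuracy of the full kernel solution.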

Real-world Object Recognition with Off-the-shelf Deep Conv Nets: How Many Objects can iCub Learn?

no code implementations13 Apr 2015 Giulia Pasquale, Carlo Ciliberto, Francesca Odone, Lorenzo Rosasco, Lorenzo Natale

In this paper we investigate such a possibility, while taking further steps in developing a computational vision system to be embedded on a robotic platform, the iCub humanoid robot.

Image Retrieval, Object Recognition

Learning Multiple Visual Tasks while Discovering their Structure

no code implementations CVPR 2015 Carlo Ciliberto, Lorenzo Rosasco, Silvia Villa

Multi-task learning is a natural approach for computer vision applications that require the simultaneous solution of several distinct but related problems, e.g., object detection, classification, tracking of multiple agents, or denoising, to name a few.

Denoising, General Classification +2

Convex Learning of Multiple Tasks and their Structure

1 code implementation13 Apr 2015 Carlo Ciliberto, Youssef Mroueh, Tomaso Poggio, Lorenzo Rosasco

In this context, a fundamental question is how to incorporate the task structure in the learning problem. We tackle this question by studying a general computational framework that allows encoding a priori knowledge of the task structure in the form of a convex penalty; in this setting, a variety of previously proposed methods can be recovered as special cases, including linear and non-linear approaches.

Multi-Task Learning

Iterative Regularization for Learning with Convex Loss Functions

no code implementations31 Mar 2015 Junhong Lin, Lorenzo Rosasco, Ding-Xuan Zhou

We consider the problem of supervised learning with convex loss functions and propose a new form of iterative regularization based on the subgradient method.

On Invariance and Selectivity in Representation Learning

no code implementations19 Mar 2015 Fabio Anselmi, Lorenzo Rosasco, Tomaso Poggio

We discuss data representations which can be learned automatically from data, are invariant to transformations, and are at the same time selective, in the sense that two points have the same representation only if one is a transformation of the other.

Representation Learning

On the Sample Complexity of Subspace Learning

no code implementations NeurIPS 2013 Alessandro Rudi, Guille D. Canas, Lorenzo Rosasco

A large number of algorithms in machine learning, from principal component analysis (PCA) and its non-linear (kernel) extensions to more recent spectral embedding and support estimation methods, rely on estimating a linear subspace from samples.

Learning An Invariant Speech Representation

no code implementations16 Jun 2014 Georgios Evangelopoulos, Stephen Voinea, Chiyuan Zhang, Lorenzo Rosasco, Tomaso Poggio

Recognition of speech, and in particular the ability to generalize and learn from small sets of labelled examples like humans do, depends on an appropriate representation of the acoustic input.

General Classification, Vowel Classification

Learning with incremental iterative regularization

no code implementations NeurIPS 2015 Lorenzo Rosasco, Silvia Villa

Within a statistical learning setting, we propose and study an iterative regularization algorithm for least squares defined by an incremental gradient method.

A Deep Representation for Invariance And Music Classification

no code implementations1 Apr 2014 Chiyuan Zhang, Georgios Evangelopoulos, Stephen Voinea, Lorenzo Rosasco, Tomaso Poggio

We present the main theoretical and computational aspects of a framework for unsupervised learning of invariant audio representations, empirically evaluated on music genre classification.

Classification, General Classification +3

Unsupervised Learning of Invariant Representations in Hierarchical Architectures

no code implementations17 Nov 2013 Fabio Anselmi, Joel Z. Leibo, Lorenzo Rosasco, Jim Mutch, Andrea Tacchetti, Tomaso Poggio

It also suggests that the main computational goal of the ventral stream of visual cortex is to provide a hierarchical representation of new objects/images which is invariant to transformations, stable, and discriminative for recognition, and that this representation may be continuously learned in an unsupervised way during development and visual experience.

Object Recognition, Speech Recognition

iCub World: Friendly Robots Help Building Good Vision Data-Sets

no code implementations15 Jun 2013 Sean Ryan Fanello, Carlo Ciliberto, Matteo Santoro, Lorenzo Natale, Giorgio Metta, Lorenzo Rosasco, Francesca Odone

In this paper we present and start analyzing the iCub World data-set, an object recognition data-set that we acquired using a Human-Robot Interaction (HRI) scheme and the iCub humanoid robot platform.

Human robot interaction, Object Recognition

On Learnability, Complexity and Stability

no code implementations24 Mar 2013 Silvia Villa, Lorenzo Rosasco, Tomaso Poggio

We consider the fundamental question of learnability of a hypothesis class in the supervised learning setting and in the general learning setting introduced by Vladimir Vapnik.

Multiclass Learning with Simplex Coding

no code implementations NeurIPS 2012 Youssef Mroueh, Tomaso Poggio, Lorenzo Rosasco, Jean-Jacques Slotine

In this paper we discuss a novel framework for multiclass learning, defined by a suitable coding/decoding strategy, namely the simplex coding, that allows generalizing to multiple classes a relaxation approach commonly used in binary classification.
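
A minimal sketch of the simplex code construction (one common construction; the decoding step and the relaxation are omitted): each of the $c$ classes is assigned a vertex of a regular simplex, i.e., a unit vector whose inner product with every other code is $-1/(c-1)$.

```python
import numpy as np

def simplex_codes(c):
    # Rows are c unit vectors with pairwise inner product -1/(c-1):
    # the vertices of a regular simplex centered at the origin.
    V = np.eye(c) - 1.0 / c                       # e_i minus the centroid
    return V / np.linalg.norm(V, axis=1, keepdims=True)

C = simplex_codes(4)
print(np.round(C @ C.T, 3))   # ones on the diagonal, -1/3 off it
```

The maximally separated codes play the role that the labels $\{-1, +1\}$ play in the binary case.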

General Classification

Learning Probability Measures with respect to Optimal Transport Metrics

no code implementations NeurIPS 2012 Guillermo Canas, Lorenzo Rosasco

We study the problem of estimating, in the sense of optimal transport metrics, a measure which is assumed supported on a manifold embedded in a Hilbert space.

Learning Theory, Quantization

Learning Sets with Separating Kernels

no code implementations16 Apr 2012 Ernesto De Vito, Lorenzo Rosasco, Alessandro Toigo

We consider the problem of learning a set from random samples.

A Primal-Dual Algorithm for Group Sparse Regularization with Overlapping Groups

no code implementations NeurIPS 2010 Sofia Mosci, Silvia Villa, Alessandro Verri, Lorenzo Rosasco

We deal with the problem of variable selection when variables must be selected group-wise, with possibly overlapping groups defined a priori.

Variable Selection

Spectral Regularization for Support Estimation

no code implementations NeurIPS 2010 Ernesto De Vito, Lorenzo Rosasco, Alessandro Toigo

In this paper we consider the problem of learning from data the support of a probability distribution when the distribution does not have a density (with respect to some reference measure).

On Invariance in Hierarchical Models

no code implementations NeurIPS 2009 Jake Bouvrie, Lorenzo Rosasco, Tomaso Poggio

A goal of central importance in the study of hierarchical models for object recognition -- and indeed the visual cortex -- is that of understanding quantitatively the trade-off between invariance and selectivity, and how invariance and discrimination properties contribute towards providing an improved representation useful for learning from data.

Object Recognition
