Search Results for author: Lorenzo Rosasco

Found 108 papers, 28 papers with code

Neural reproducing kernel Banach spaces and representer theorems for deep networks

no code implementations13 Mar 2024 Francesca Bartolucci, Ernesto de Vito, Lorenzo Rosasco, Stefano Vigogna

Studying the function spaces defined by neural networks helps to understand the corresponding learning models and their inductive bias.

Inductive Bias

Linear quadratic control of nonlinear systems with Koopman operator learning and the Nyström method

1 code implementation5 Mar 2024 Edoardo Caldarelli, Antoine Chatalic, Adrià Colomé, Cesare Molinari, Carlos Ocampo-Martinez, Carme Torras, Lorenzo Rosasco

In this paper, we study how the Koopman operator framework can be combined with kernel methods to effectively control nonlinear dynamical systems.

Operator learning

Key Design Choices in Source-Free Unsupervised Domain Adaptation: An In-depth Empirical Analysis

no code implementations25 Feb 2024 Andrea Maracani, Raffaello Camoriano, Elisa Maiettini, Davide Talon, Lorenzo Rosasco, Lorenzo Natale

This study provides a comprehensive benchmark framework for Source-Free Unsupervised Domain Adaptation (SF-UDA) in image classification, aiming to achieve a rigorous empirical understanding of the complex relationships between multiple key design factors in SF-UDA methods.

Image Classification Unsupervised Domain Adaptation

Efficient Numerical Integration in Reproducing Kernel Hilbert Spaces via Leverage Scores Sampling

1 code implementation22 Nov 2023 Antoine Chatalic, Nicolas Schreuder, Ernesto de Vito, Lorenzo Rosasco

In this work we consider the problem of numerical integration, i.e., approximating integrals with respect to a target probability measure using only pointwise evaluations of the integrand.

Numerical Integration
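As a concrete illustration of the setting above, the sketch below approximates $\int f \, d\pi$ from pointwise evaluations of $f$ alone, using plain Monte Carlo with uniform weights; the paper's method instead selects evaluation points via leverage scores sampling, which is not reproduced here. All names and choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Goal: estimate I = ∫ f(x) dπ(x) with π = N(0, 1), using only
# pointwise evaluations of f at sampled nodes.
f = lambda x: np.cos(x)            # integrand; E[cos(X)] = exp(-1/2) for X ~ N(0,1)

n = 200_000
nodes = rng.standard_normal(n)     # i.i.d. nodes drawn from π (plain Monte Carlo)
estimate = f(nodes).mean()         # equal-weight quadrature rule

true_value = np.exp(-0.5)
print(abs(estimate - true_value))  # error shrinks like O(n^{-1/2})
```

Smarter node selection (e.g. by leverage scores) aims to beat this $O(n^{-1/2})$ rate for integrands in an RKHS.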

Shortcuts for causal discovery of nonlinear models by score matching

no code implementations22 Oct 2023 Francesco Montagna, Nicoletta Noceti, Lorenzo Rosasco, Francesco Locatello

The use of simulated data in the field of causal discovery is ubiquitous due to the scarcity of annotated real data.

Causal Discovery

Scalable Causal Discovery with Score Matching

no code implementations6 Apr 2023 Francesco Montagna, Nicoletta Noceti, Lorenzo Rosasco, Kun Zhang, Francesco Locatello

This paper demonstrates how to discover the whole causal graph from the second derivative of the log-likelihood in observational non-linear additive Gaussian noise models.

Causal Discovery

Causal Discovery with Score Matching on Additive Models with Arbitrary Noise

no code implementations6 Apr 2023 Francesco Montagna, Nicoletta Noceti, Lorenzo Rosasco, Kun Zhang, Francesco Locatello

Causal discovery methods are intrinsically constrained by the set of assumptions needed to ensure structure identifiability.

Additive models Causal Discovery

Key Design Choices for Double-Transfer in Source-Free Unsupervised Domain Adaptation

no code implementations10 Feb 2023 Andrea Maracani, Raffaello Camoriano, Elisa Maiettini, Davide Talon, Lorenzo Rosasco, Lorenzo Natale

Fine-tuning and Domain Adaptation emerged as effective strategies for efficiently transferring deep learning models to new target tasks.

Unsupervised Domain Adaptation

Regularized ERM on random subspaces

no code implementations4 Dec 2022 Andrea Della Vecchia, Ernesto de Vito, Lorenzo Rosasco

We study a natural extension of classical empirical risk minimization, where the hypothesis space is a random subspace of a given space.

Computational Efficiency

Top-Tuning: a study on transfer learning for an efficient alternative to fine tuning for image classification with fast kernel methods

no code implementations16 Sep 2022 Paolo Didier Alfano, Vito Paolo Pastore, Lorenzo Rosasco, Francesca Odone

In this paper, focusing on image classification, we consider a simple transfer learning approach exploiting pre-trained convolutional features as input for a fast-to-train kernel method.

Image Classification Transfer Learning

Efficient Unsupervised Learning for Plankton Images

no code implementations14 Sep 2022 Paolo Didier Alfano, Marco Rando, Marco Letizia, Francesca Odone, Lorenzo Rosasco, Vito Paolo Pastore

We compare our method with state-of-the-art unsupervised approaches, where a set of pre-defined hand-crafted features is used for clustering of plankton images.

Clustering

Approximate Bayesian Neural Operators: Uncertainty Quantification for Parametric PDEs

no code implementations2 Aug 2022 Emilia Magnani, Nicholas Krämer, Runa Eschenhagen, Lorenzo Rosasco, Philipp Hennig

Neural operators are a type of deep architecture that learns to solve (i.e., learns the nonlinear solution operator of) partial differential equations (PDEs).

Gaussian Processes Uncertainty Quantification

Learn Fast, Segment Well: Fast Object Segmentation Learning on the iCub Robot

1 code implementation27 Jun 2022 Federico Ceola, Elisa Maiettini, Giulia Pasquale, Giacomo Meanti, Lorenzo Rosasco, Lorenzo Natale

In this work, we focus on the instance segmentation task and provide a comprehensive study of different techniques that allow adapting an object segmentation model in presence of novel objects or different domains.

Instance Segmentation Segmentation +1

Stochastic Zeroth order Descent with Structured Directions

no code implementations10 Jun 2022 Marco Rando, Cesare Molinari, Silvia Villa, Lorenzo Rosasco

For smooth convex functions we prove almost sure convergence of the iterates and a convergence rate on the function values of the form $O((d/l)\,k^{-c})$ for every $c<1/2$, which is arbitrarily close to the one of Stochastic Gradient Descent (SGD) in terms of the number of iterations.

AdaTask: Adaptive Multitask Online Learning

no code implementations31 May 2022 Pierre Laforgue, Andrea Della Vecchia, Nicolò Cesa-Bianchi, Lorenzo Rosasco

We introduce and analyze AdaTask, a multitask online learning algorithm that adapts to the unknown structure of the tasks.

An elementary analysis of ridge regression with random design

no code implementations16 Mar 2022 Jaouad Mourtada, Lorenzo Rosasco

In this note, we provide an elementary analysis of the prediction error of ridge regression with random design.

regression

Iterative regularization for low complexity regularizers

no code implementations1 Feb 2022 Cesare Molinari, Mathurin Massias, Lorenzo Rosasco, Silvia Villa

Our approach is based on a primal-dual algorithm of which we analyze convergence and stability properties, even in the case where the original problem is unfeasible.

Nyström Kernel Mean Embeddings

no code implementations31 Jan 2022 Antoine Chatalic, Nicolas Schreuder, Alessandro Rudi, Lorenzo Rosasco

Our main result is an upper bound on the approximation error of this procedure.

Efficient Hyperparameter Tuning for Large Scale Kernel Ridge Regression

1 code implementation17 Jan 2022 Giacomo Meanti, Luigi Carratino, Ernesto de Vito, Lorenzo Rosasco

Our analysis shows the benefit of the proposed approach, that we hence incorporate in a library for large scale kernel methods to derive adaptively tuned solutions.

regression

Mean Nyström Embeddings for Adaptive Compressive Learning

1 code implementation21 Oct 2021 Antoine Chatalic, Luigi Carratino, Ernesto de Vito, Lorenzo Rosasco

Compressive learning is an approach to efficient large scale learning based on sketching an entire dataset to a single mean embedding (the sketch), i.e., a vector of generalized moments.
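A minimal sketch of the compressive-learning idea described above: compress a whole dataset into one fixed-size vector of generalized moments (here random Fourier moments) by averaging. This is a generic illustration, not the paper's Nyström variant; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sketch(X, Omega):
    """Compress a dataset X of shape (n, d) into one mean embedding:
    the empirical average of random Fourier moments x -> e^{i w^T x}."""
    Z = np.exp(1j * X @ Omega.T)   # (n, m) generalized moments
    return Z.mean(axis=0)          # (m,) sketch: size independent of n

d, m = 5, 64
Omega = rng.standard_normal((m, d))        # frequencies defining the moments

X_big = rng.standard_normal((100_000, d))
s = sketch(X_big, Omega)                   # entire dataset -> one m-vector

# Streaming-friendly: sketches of equal-sized chunks average to the same vector.
s_chunks = np.mean([sketch(chunk, Omega)
                    for chunk in np.split(X_big, 10)], axis=0)
print(np.allclose(s, s_chunks))
```

Learning then operates on the $m$-dimensional sketch rather than the $n$ data points, which is the source of the computational savings.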

Understanding neural networks with reproducing kernel Banach spaces

no code implementations20 Sep 2021 Francesca Bartolucci, Ernesto de Vito, Lorenzo Rosasco, Stefano Vigogna

Characterizing the function spaces corresponding to neural networks can provide a way to understand their properties.

Ada-BKB: Scalable Gaussian Process Optimization on Continuous Domains by Adaptive Discretization

no code implementations16 Jun 2021 Marco Rando, Luigi Carratino, Silvia Villa, Lorenzo Rosasco

In this paper, we introduce Ada-BKB (Adaptive Budgeted Kernelized Bandit), a no-regret Gaussian process optimization algorithm for functions on continuous domains that provably runs in $O(T^2 d_\text{eff}^2)$, where $d_\text{eff}$ is the effective dimension of the explored space and is typically much smaller than $T$.

Learning to predict target location with turbulent odor plumes

no code implementations16 Jun 2021 Nicola Rigolli, Nicodemo Magnoli, Lorenzo Rosasco, Agnese Seminara

Animal behavior and neural recordings show that the brain is able to measure both the intensity of an odor and the timing of odor encounters.

From inexact optimization to learning via gradient concentration

no code implementations9 Jun 2021 Bernhard Stankewitz, Nicole Mücke, Lorenzo Rosasco

Optimization in machine learning typically deals with the minimization of empirical objectives defined by training data.

Structured Prediction for CRiSP Inverse Kinematics Learning with Misspecified Robot Models

1 code implementation25 Feb 2021 Gian Maria Marconi, Raffaello Camoriano, Lorenzo Rosasco, Carlo Ciliberto

Among these, computing the inverse kinematics of a redundant robot arm poses a significant challenge due to the non-linear structure of the robot, the hard joint constraints and the non-invertible kinematics map.

Structured Prediction

From Handheld to Unconstrained Object Detection: a Weakly-supervised On-line Learning Approach

no code implementations28 Dec 2020 Elisa Maiettini, Andrea Maracani, Raffaello Camoriano, Giulia Pasquale, Vadim Tikhanoff, Lorenzo Rosasco, Lorenzo Natale

We show that the robot can improve adaptation to novel domains, either by interacting with a human teacher (Active Learning) or with an autonomous supervision (Semi-supervised Learning).

Active Learning Line Detection +4

Fast Object Segmentation Learning with Kernel-based Methods for Robotics

1 code implementation25 Nov 2020 Federico Ceola, Elisa Maiettini, Giulia Pasquale, Lorenzo Rosasco, Lorenzo Natale

Our approach is validated on the YCB-Video dataset which is widely adopted in the computer vision and robotics community, demonstrating that we can achieve and even surpass performance of the state-of-the-art, with a significant reduction (${\sim}6\times$) of the training time.

Object Semantic Segmentation

Decentralised Learning with Random Features and Distributed Gradient Descent

1 code implementation ICML 2020 Dominic Richards, Patrick Rebeschini, Lorenzo Rosasco

Under standard source and capacity assumptions, we establish high probability bounds on the predictive performance for each agent as a function of the step size, number of iterations, inverse spectral gap of the communication matrix and number of Random Features.

For interpolating kernel machines, minimizing the norm of the ERM solution minimizes stability

no code implementations28 Jun 2020 Akshay Rangamani, Lorenzo Rosasco, Tomaso Poggio

We study the average $\mbox{CV}_{loo}$ stability of kernel ridge-less regression and derive corresponding risk bounds.

regression

Kernel methods through the roof: handling billions of points efficiently

1 code implementation NeurIPS 2020 Giacomo Meanti, Luigi Carratino, Lorenzo Rosasco, Alessandro Rudi

Kernel methods provide an elegant and principled approach to nonparametric learning, but so far could hardly be used in large scale problems, since naïve implementations scale poorly with data size.

Interpolation and Learning with Scale Dependent Kernels

no code implementations17 Jun 2020 Nicolò Pagliana, Alessandro Rudi, Ernesto De Vito, Lorenzo Rosasco

We study the learning properties of nonparametric ridge-less least squares.

Regularized ERM on random subspaces

no code implementations17 Jun 2020 Andrea Della Vecchia, Jaouad Mourtada, Ernesto de Vito, Lorenzo Rosasco

We study a natural extension of classical empirical risk minimization, where the hypothesis space is a random subspace of a given space.

Computational Efficiency

Iterative regularization for convex regularizers

1 code implementation17 Jun 2020 Cesare Molinari, Mathurin Massias, Lorenzo Rosasco, Silvia Villa

We study iterative regularization for linear models, when the bias is convex but not necessarily strongly convex.

Asymptotics of Ridge (less) Regression under General Source Condition

no code implementations11 Jun 2020 Dominic Richards, Jaouad Mourtada, Lorenzo Rosasco

We analyze the prediction error of ridge regression in an asymptotic regime where the sample size and dimension go to infinity at a proportional rate.

regression

Hyperbolic Manifold Regression

no code implementations28 May 2020 Gian Maria Marconi, Lorenzo Rosasco, Carlo Ciliberto

Geometric representation learning has recently shown great promise in several machine learning settings, ranging from relational learning to language processing and generative models.

BIG-bench Machine Learning regression +3

Constructing fast approximate eigenspaces with application to the fast graph Fourier transforms

no code implementations22 Feb 2020 Cristian Rusu, Lorenzo Rosasco

We investigate numerically efficient approximations of eigenspaces associated to symmetric and general matrices.

Statistical and Computational Trade-Offs in Kernel K-Means

no code implementations NeurIPS 2018 Daniele Calandriello, Lorenzo Rosasco

We investigate the efficiency of k-means in terms of both statistical and computational requirements.

Fast approximation of orthogonal matrices and application to PCA

no code implementations18 Jul 2019 Cristian Rusu, Lorenzo Rosasco

We study the problem of approximating orthogonal matrices so that their application is numerically fast and yet accurate.

Gain with no Pain: Efficient Kernel-PCA by Nyström Sampling

no code implementations11 Jul 2019 Nicholas Sterge, Bharath Sriperumbudur, Lorenzo Rosasco, Alessandro Rudi

In this paper, we propose and study a Nyström based approach to efficient large scale kernel principal component analysis (PCA).

Computational Efficiency

Multi-Scale Vector Quantization with Reconstruction Trees

no code implementations8 Jul 2019 Enrico Cecini, Ernesto de Vito, Lorenzo Rosasco

Our main technical contribution is an analysis of the expected distortion achieved by the proposed algorithm, when the data are assumed to be sampled from a fixed unknown distribution.

Quantization

Implicit Regularization of Accelerated Methods in Hilbert Spaces

no code implementations NeurIPS 2019 Nicolò Pagliana, Lorenzo Rosasco

We study learning properties of accelerated gradient descent methods for linear least-squares in Hilbert spaces.

Reproducing kernel Hilbert spaces on manifolds: Sobolev and Diffusion spaces

no code implementations27 May 2019 Ernesto De Vito, Nicole Mücke, Lorenzo Rosasco

We study reproducing kernel Hilbert spaces (RKHS) on a Riemannian manifold.

Gaussian Process Optimization with Adaptive Sketching: Scalable and No Regret

1 code implementation13 Mar 2019 Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco

Moreover, we show that our procedure selects at most $\tilde{O}(d_{eff})$ points, where $d_{eff}$ is the effective dimension of the explored space, which is typically much smaller than both $d$ and $t$.

Gaussian Processes

Theory III: Dynamics and Generalization in Deep Networks

no code implementations12 Mar 2019 Andrzej Banburski, Qianli Liao, Brando Miranda, Lorenzo Rosasco, Fernanda De La Torre, Jack Hidary, Tomaso Poggio

In particular, gradient descent induces a dynamics of the normalized weights which converge for $t \to \infty$ to an equilibrium which corresponds to a minimum norm (or maximum margin) solution.

Beating SGD Saturation with Tail-Averaging and Minibatching

no code implementations NeurIPS 2019 Nicole Mücke, Gergely Neu, Lorenzo Rosasco

While stochastic gradient descent (SGD) is one of the major workhorses in machine learning, the learning properties of many practically used variants are poorly understood.

On Fast Leverage Score Sampling and Optimal Learning

1 code implementation NeurIPS 2018 Alessandro Rudi, Daniele Calandriello, Luigi Carratino, Lorenzo Rosasco

Leverage score sampling provides an appealing way to perform approximate computations for large matrices.

regression

Learning with SGD and Random Features

no code implementations NeurIPS 2018 Luigi Carratino, Alessandro Rudi, Lorenzo Rosasco

Sketching and stochastic gradient methods are arguably the most common techniques to derive efficient large scale learning algorithms.

Manifold Structured Prediction

no code implementations NeurIPS 2018 Alessandro Rudi, Carlo Ciliberto, Gian Maria Marconi, Lorenzo Rosasco

Structured prediction provides a general framework to deal with supervised problems where the outputs have semantically rich structure.

regression Structured Prediction

Speeding-up Object Detection Training for Robotics with FALKON

no code implementations23 Mar 2018 Elisa Maiettini, Giulia Pasquale, Lorenzo Rosasco, Lorenzo Natale

We address the size and imbalance of training data by exploiting the stochastic subsampling intrinsic to the method and a novel, fast bootstrapping approach.

object-detection Object Detection +1

Iterate averaging as regularization for stochastic gradient descent

no code implementations22 Feb 2018 Gergely Neu, Lorenzo Rosasco

We propose and analyze a variant of the classic Polyak-Ruppert averaging scheme, broadly used in stochastic gradient methods.

regression
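The Polyak-Ruppert scheme mentioned above can be sketched as follows: run SGD on a least-squares problem and maintain a running average of the iterates, which plays the role of implicit regularization. This is a generic illustration under assumed toy data, not the paper's exact variant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Least-squares problem with a known ground-truth vector w_star.
n, d = 2000, 5
w_star = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = X @ w_star + 0.5 * rng.standard_normal(n)

w = np.zeros(d)       # SGD iterate
avg = np.zeros(d)     # Polyak-Ruppert running average of the iterates
step = 0.01
for t in range(n):
    i = rng.integers(n)                  # pick one sample per step
    grad = (X[i] @ w - y[i]) * X[i]      # stochastic gradient of the square loss
    w -= step * grad                     # SGD update
    avg += (w - avg) / (t + 1)           # incremental average: mean of w_1..w_t

# Averaging damps the fluctuations of the last iterate around w_star.
print(np.linalg.norm(avg - w_star), np.linalg.norm(w - w_star))
```

The averaged iterate trades the noise of the last iterate for a small bias from the early transient, which is exactly the regularization effect the paper analyzes.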

Optimal Rates for Spectral Algorithms with Least-Squares Regression over Hilbert Spaces

no code implementations20 Jan 2018 Junhong Lin, Alessandro Rudi, Lorenzo Rosasco, Volkan Cevher

In this paper, we study regression problems over a separable Hilbert space with the square loss, covering non-parametric regression over a reproducing kernel Hilbert space.

regression

Theory of Deep Learning III: explaining the non-overfitting puzzle

no code implementations30 Dec 2017 Tomaso Poggio, Kenji Kawaguchi, Qianli Liao, Brando Miranda, Lorenzo Rosasco, Xavier Boix, Jack Hidary, Hrushikesh Mhaskar

In this note, we show that the dynamics associated with gradient descent minimization of nonlinear networks is topologically equivalent, near the asymptotically stable minima of the empirical error, to a linear gradient system in a quadratic potential with a degenerate (for square loss) or almost degenerate (for logistic or cross-entropy loss) Hessian.

General Classification

Optimal Rates for Learning with Nyström Stochastic Gradient Methods

no code implementations21 Oct 2017 Junhong Lin, Lorenzo Rosasco

In the setting of nonparametric regression, we propose and study a combination of stochastic gradient methods with Nyström subsampling, allowing multiple passes over the data and mini-batches.

regression

Are we done with object recognition? The iCub robot's perspective

1 code implementation28 Sep 2017 Giulia Pasquale, Carlo Ciliberto, Francesca Odone, Lorenzo Rosasco, Lorenzo Natale

We report on an extensive study of the benefits and limitations of current deep learning approaches to object recognition in robot vision scenarios, introducing a novel dataset used for our investigation.

Image Retrieval Object +4

Solving $\ell^p\!$-norm regularization with tensor kernels

no code implementations18 Jul 2017 Saverio Salzo, Johan A. K. Suykens, Lorenzo Rosasco

In this paper, we discuss how a suitable family of tensor kernels can be used to efficiently solve nonparametric extensions of $\ell^p$ regularized learning methods.

Don't relax: early stopping for convex regularization

no code implementations18 Jul 2017 Simon Matet, Lorenzo Rosasco, Silvia Villa, Bang Long Vu

We consider the problem of designing efficient regularization algorithms when regularization is encoded by a (strongly) convex functional.

Generalization Properties of Doubly Stochastic Learning Algorithms

no code implementations3 Jul 2017 Junhong Lin, Lorenzo Rosasco

In this paper, we provide an in-depth theoretical analysis for different variants of doubly stochastic learning algorithms within the setting of nonparametric regression in a reproducing kernel Hilbert space and considering the square loss.

FALKON: An Optimal Large Scale Kernel Method

4 code implementations NeurIPS 2017 Alessandro Rudi, Luigi Carratino, Lorenzo Rosasco

In this paper, we take a substantial step in scaling up kernel methods, proposing FALKON, a novel algorithm that can efficiently process millions of points.

Consistent Multitask Learning with Nonlinear Output Relations

no code implementations NeurIPS 2017 Carlo Ciliberto, Alessandro Rudi, Lorenzo Rosasco, Massimiliano Pontil

However, in practice assuming the tasks to be linearly related might be restrictive, and allowing for nonlinear structures is a challenge.

Structured Prediction

Convergence of the Forward-Backward Algorithm: Beyond the Worst Case with the Help of Geometry

no code implementations28 Mar 2017 Guillaume Garrigos, Lorenzo Rosasco, Silvia Villa

We provide a comprehensive study of the convergence of the forward-backward algorithm under suitable geometric conditions, such as conditioning or Łojasiewicz properties.

Optimal Learning for Multi-pass Stochastic Gradient Methods

no code implementations NeurIPS 2016 Junhong Lin, Lorenzo Rosasco

We analyze the learning properties of the stochastic gradient method when multiple passes over the data and mini-batches are allowed.

Why and When Can Deep -- but Not Shallow -- Networks Avoid the Curse of Dimensionality: a Review

no code implementations2 Nov 2016 Tomaso Poggio, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, Qianli Liao

The paper characterizes classes of functions for which deep learning can be exponentially better than shallow learning.

Optimal Rates for Multi-pass Stochastic Gradient Methods

no code implementations28 May 2016 Junhong Lin, Lorenzo Rosasco

As a byproduct, we derive optimal convergence results for batch gradient methods (even in the non-attainable cases).

Generalization Properties and Implicit Regularization for Multiple Passes SGM

1 code implementation26 May 2016 Junhong Lin, Raffaello Camoriano, Lorenzo Rosasco

We study the generalization properties of stochastic gradient methods for learning with convex loss functions and linearly parameterized functions.

Incremental Robot Learning of New Objects with Fixed Update Time

1 code implementation17 May 2016 Raffaello Camoriano, Giulia Pasquale, Carlo Ciliberto, Lorenzo Natale, Lorenzo Rosasco, Giorgio Metta

We consider object recognition in the context of lifelong learning, where a robotic agent learns to discriminate between a growing number of object classes as it accumulates experience about the environment.

Active Learning General Classification +2

Generalization Properties of Learning with Random Features

1 code implementation NeurIPS 2017 Alessandro Rudi, Lorenzo Rosasco

We study the generalization properties of ridge regression with random features in the statistical learning framework.

regression

Incremental Semiparametric Inverse Dynamics Learning

no code implementations18 Jan 2016 Raffaello Camoriano, Silvio Traversaro, Lorenzo Rosasco, Giorgio Metta, Francesco Nori

This paper presents a novel approach for incremental semiparametric inverse dynamics learning.

NYTRO: When Subsampling Meets Early Stopping

1 code implementation19 Oct 2015 Tomas Angles, Raffaello Camoriano, Alessandro Rudi, Lorenzo Rosasco

Early stopping is a well known approach to reduce the time complexity for performing training and model selection of large scale learning machines.

Model Selection regression

Holographic Embeddings of Knowledge Graphs

4 code implementations16 Oct 2015 Maximilian Nickel, Lorenzo Rosasco, Tomaso Poggio

Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs.

Knowledge Graphs Link Prediction +1

Deep Convolutional Networks are Hierarchical Kernel Machines

no code implementations5 Aug 2015 Fabio Anselmi, Lorenzo Rosasco, Cheston Tan, Tomaso Poggio

In i-theory a typical layer of a hierarchical architecture consists of HW modules pooling the dot products of the inputs to the layer with the transformations of a few templates under a group.

Less is More: Nyström Computational Regularization

1 code implementation NeurIPS 2015 Alessandro Rudi, Raffaello Camoriano, Lorenzo Rosasco

We study Nyström type subsampling approaches to large scale kernel methods, and prove learning bounds in the statistical learning setting, where random sampling and high probability estimates are considered.
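A minimal sketch of Nyström subsampling for kernel ridge regression, assuming a Gaussian kernel and uniform landmark sampling: the full $n \times n$ kernel system is replaced by an $m$-dimensional one built from $m \ll n$ subsampled landmark points. Variable names and parameter choices are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Toy regression data.
n, p, m = 1000, 2, 50                        # m = number of Nyström landmarks
X = rng.uniform(-2, 2, size=(n, p))
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.1 * rng.standard_normal(n)

idx = rng.choice(n, size=m, replace=False)   # uniform subsampling of landmarks
Xm = X[idx]

K_nm = gaussian_kernel(X, Xm)                # (n, m) data-landmark kernel
K_mm = gaussian_kernel(Xm, Xm)               # (m, m) landmark kernel

lam = 1e-4
# Solve the m-dimensional Nyström system instead of the full n x n one.
A = K_nm.T @ K_nm + lam * n * K_mm
alpha = np.linalg.solve(A + 1e-10 * np.eye(m), K_nm.T @ y)

pred = K_nm @ alpha
print(np.mean((pred - y) ** 2))              # training error near the noise level
```

The paper's "less is more" message is that a modest number of landmarks, together with the right regularization, suffices to match the statistical accuracy of the exact estimator.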

Learning Multiple Visual Tasks while Discovering their Structure

no code implementations CVPR 2015 Carlo Ciliberto, Lorenzo Rosasco, Silvia Villa

Multi-task learning is a natural approach for computer vision applications that require the simultaneous solution of several distinct but related problems, e.g., object detection, classification, tracking of multiple agents, or denoising, to name a few.

Denoising General Classification +3

Real-world Object Recognition with Off-the-shelf Deep Conv Nets: How Many Objects can iCub Learn?

no code implementations13 Apr 2015 Giulia Pasquale, Carlo Ciliberto, Francesca Odone, Lorenzo Rosasco, Lorenzo Natale

In this paper we investigate such possibility, while taking further steps in developing a computational vision system to be embedded on a robotic platform, the iCub humanoid robot.

Image Retrieval Object Recognition +1

Convex Learning of Multiple Tasks and their Structure

1 code implementation13 Apr 2015 Carlo Ciliberto, Youssef Mroueh, Tomaso Poggio, Lorenzo Rosasco

In this context a fundamental question is how to incorporate the task structure into the learning problem. We tackle this question by studying a general computational framework that allows encoding a priori knowledge of the task structure in the form of a convex penalty; in this setting a variety of previously proposed methods can be recovered as special cases, including linear and non-linear approaches.

Multi-Task Learning

Iterative Regularization for Learning with Convex Loss Functions

no code implementations31 Mar 2015 Junhong Lin, Lorenzo Rosasco, Ding-Xuan Zhou

We consider the problem of supervised learning with convex loss functions and propose a new form of iterative regularization based on the subgradient method.

BIG-bench Machine Learning

On Invariance and Selectivity in Representation Learning

no code implementations19 Mar 2015 Fabio Anselmi, Lorenzo Rosasco, Tomaso Poggio

We discuss data representations which can be learned automatically from data, are invariant to transformations, and at the same time selective, in the sense that two points have the same representation only if one is a transformation of the other.

Representation Learning

On the Sample Complexity of Subspace Learning

no code implementations NeurIPS 2013 Alessandro Rudi, Guille D. Canas, Lorenzo Rosasco

A large number of algorithms in machine learning, from principal component analysis (PCA), and its non-linear (kernel) extensions, to more recent spectral embedding and support estimation methods, rely on estimating a linear subspace from samples.

Learning An Invariant Speech Representation

no code implementations16 Jun 2014 Georgios Evangelopoulos, Stephen Voinea, Chiyuan Zhang, Lorenzo Rosasco, Tomaso Poggio

Recognition of speech, and in particular the ability to generalize and learn from small sets of labelled examples like humans do, depends on an appropriate representation of the acoustic input.

General Classification Sound Classification +1

Learning with incremental iterative regularization

no code implementations NeurIPS 2015 Lorenzo Rosasco, Silvia Villa

Within a statistical learning setting, we propose and study an iterative regularization algorithm for least squares defined by an incremental gradient method.

BIG-bench Machine Learning

A Deep Representation for Invariance And Music Classification

no code implementations1 Apr 2014 Chiyuan Zhang, Georgios Evangelopoulos, Stephen Voinea, Lorenzo Rosasco, Tomaso Poggio

We present the main theoretical and computational aspects of a framework for unsupervised learning of invariant audio representations, empirically evaluated on music genre classification.

Classification General Classification +3

Unsupervised Learning of Invariant Representations in Hierarchical Architectures

no code implementations17 Nov 2013 Fabio Anselmi, Joel Z. Leibo, Lorenzo Rosasco, Jim Mutch, Andrea Tacchetti, Tomaso Poggio

It also suggests that the main computational goal of the ventral stream of visual cortex is to provide a hierarchical representation of new objects/images which is invariant to transformations, stable, and discriminative for recognition---and that this representation may be continuously learned in an unsupervised way during development and visual experience.

Object Recognition speech-recognition +1

iCub World: Friendly Robots Help Building Good Vision Data-Sets

no code implementations15 Jun 2013 Sean Ryan Fanello, Carlo Ciliberto, Matteo Santoro, Lorenzo Natale, Giorgio Metta, Lorenzo Rosasco, Francesca Odone

In this paper we present and start analyzing the iCub World data-set, an object recognition data-set that we acquired using a Human-Robot Interaction (HRI) scheme and the iCub humanoid robot platform.

Object Recognition

On Learnability, Complexity and Stability

no code implementations24 Mar 2013 Silvia Villa, Lorenzo Rosasco, Tomaso Poggio

We consider the fundamental question of learnability of a hypotheses class in the supervised learning setting and in the general learning setting introduced by Vladimir Vapnik.

Multiclass Learning with Simplex Coding

no code implementations NeurIPS 2012 Youssef Mroueh, Tomaso Poggio, Lorenzo Rosasco, Jean-Jacques Slotine

In this paper we discuss a novel framework for multiclass learning, defined by a suitable coding/decoding strategy, namely the simplex coding, that generalizes to multiple classes a relaxation approach commonly used in binary classification.

Binary Classification General Classification

Learning Probability Measures with respect to Optimal Transport Metrics

no code implementations NeurIPS 2012 Guillermo Canas, Lorenzo Rosasco

We study the problem of estimating, in the sense of optimal transport metrics, a measure which is assumed supported on a manifold embedded in a Hilbert space.

Learning Theory Quantization

Learning Sets with Separating Kernels

no code implementations16 Apr 2012 Ernesto De Vito, Lorenzo Rosasco, Alessandro Toigo

We consider the problem of learning a set from random samples.

Spectral Regularization for Support Estimation

no code implementations NeurIPS 2010 Ernesto De Vito, Lorenzo Rosasco, Alessandro Toigo

In this paper we consider the problem of learning from data the support of a probability distribution when the distribution {\em does not} have a density (with respect to some reference measure).

A Primal-Dual Algorithm for Group Sparse Regularization with Overlapping Groups

no code implementations NeurIPS 2010 Sofia Mosci, Silvia Villa, Alessandro Verri, Lorenzo Rosasco

We deal with the problem of variable selection when variables must be selected group-wise, with possibly overlapping groups defined a priori.

Variable Selection

On Invariance in Hierarchical Models

no code implementations NeurIPS 2009 Jake Bouvrie, Lorenzo Rosasco, Tomaso Poggio

A goal of central importance in the study of hierarchical models for object recognition -- and indeed the visual cortex -- is that of understanding quantitatively the trade-off between invariance and selectivity, and how invariance and discrimination properties contribute towards providing an improved representation useful for learning from data.

Object Recognition
