no code implementations • 9 Jan 2025 • Emilia Magnani, Ernesto de Vito, Philipp Hennig, Lorenzo Rosasco
We consider the problem of learning convolution operators associated to compact Abelian groups.
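The cyclic group $\mathbb{Z}_n$ gives the simplest instance of this setting. The sketch below (an illustration of the general fact, not the paper's method; all values are chosen arbitrarily) checks that a convolution operator on $\mathbb{Z}_n$ is diagonalized by the discrete Fourier transform:

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
h = rng.standard_normal(n)   # filter defining the convolution operator
x = rng.standard_normal(n)   # signal on the cyclic group Z_n

# convolution operator applied directly: (h * x)[k] = sum_j h[j] x[(k - j) mod n]
direct = np.array([sum(h[j] * x[(k - j) % n] for j in range(n)) for k in range(n)])

# the same operator is diagonal in the Fourier basis (the characters of Z_n)
spectral = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)).real

assert np.allclose(direct, spectral)
```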
no code implementations • 21 Dec 2024 • Hippolyte Labarrière, Cesare Molinari, Lorenzo Rosasco, Silvia Villa, Cristian Vega
Overparameterized models trained with (stochastic) gradient descent are ubiquitous in modern machine learning.
no code implementations • 1 Sep 2024 • Andrea Maracani, Lorenzo Rosasco, Lorenzo Natale
Deep Neural Networks have significantly impacted many computer vision tasks.
no code implementations • 23 May 2024 • Marco Rando, Luca Demetrio, Lorenzo Rosasco, Fabio Roli
Machine learning malware detectors are vulnerable to adversarial EXEmples, i.e., carefully crafted Windows programs tailored to evade detection.
no code implementations • 26 Apr 2024 • Marco Rando, Martin James, Alessandro Verri, Lorenzo Rosasco, Agnese Seminara
By introducing a temporal memory, we demonstrate that two salient features of odor traces, discretized into a few olfactory states, are sufficient to learn navigation in a realistic odor plume.
no code implementations • 13 Mar 2024 • Francesca Bartolucci, Ernesto de Vito, Lorenzo Rosasco, Stefano Vigogna
Studying the function spaces defined by neural networks helps to understand the corresponding learning models and their inductive bias.
1 code implementation • 5 Mar 2024 • Edoardo Caldarelli, Antoine Chatalic, Adrià Colomé, Cesare Molinari, Carlos Ocampo-Martinez, Carme Torras, Lorenzo Rosasco
In this paper, we study how the Koopman operator framework can be combined with kernel methods to effectively control nonlinear dynamical systems.
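As a toy illustration of the idea (not the paper's estimator; kernel, dynamics, and regularization value are assumptions), the action of the Koopman operator on an observable, $(\mathcal{K}g)(x_t) = g(x_{t+1})$ for deterministic dynamics, can be estimated from snapshot pairs by kernel ridge regression:

```python
import numpy as np

def gauss(A, B, s=0.5):
    return np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / (2 * s * s))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (500, 1))        # states x_t
Y = 0.9 * X + 0.1 * np.sin(3 * X)       # successor states x_{t+1} (assumed dynamics)
g = lambda x: x ** 2                    # observable to propagate

# kernel ridge regression of g(x_{t+1}) on x_t estimates (Koopman g)(x_t)
K = gauss(X, X)
alpha = np.linalg.solve(K + 1e-6 * 500 * np.eye(500), g(Y))

x_test = np.array([[0.3]])
print(gauss(x_test, X) @ alpha, g(0.9 * 0.3 + 0.1 * np.sin(0.9)))  # estimate vs truth
```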
no code implementations • 25 Feb 2024 • Andrea Maracani, Raffaello Camoriano, Elisa Maiettini, Davide Talon, Lorenzo Rosasco, Lorenzo Natale
This study provides a comprehensive benchmark framework for Source-Free Unsupervised Domain Adaptation (SF-UDA) in image classification, aiming to achieve a rigorous empirical understanding of the complex relationships between multiple key design factors in SF-UDA methods.
1 code implementation • 22 Nov 2023 • Antoine Chatalic, Nicolas Schreuder, Ernesto de Vito, Lorenzo Rosasco
In this work we consider the problem of numerical integration, i.e., approximating integrals with respect to a target probability measure using only pointwise evaluations of the integrand.
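In its plainest form the problem looks as follows; uniform-weight Monte Carlo is the baseline that quadrature rules of this kind aim to improve on (the integrand and target measure below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.cos(x)            # integrand, accessed only through point evaluations
x = rng.standard_normal(10_000)    # samples from the target measure (standard normal)
estimate = f(x).mean()             # weighted sum with uniform weights 1/n
exact = np.exp(-0.5)               # E[cos(X)] for X ~ N(0, 1)
print(estimate, exact)
```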
1 code implementation • 2 Nov 2023 • Gabriele M. Caddeo, Andrea Maracani, Paolo D. Alfano, Nicola A. Piga, Lorenzo Rosasco, Lorenzo Natale
Our evaluation is conducted on a dataset of tactile images obtained from a set of ten 3D printed YCB objects.
no code implementations • 22 Oct 2023 • Francesco Montagna, Nicoletta Noceti, Lorenzo Rosasco, Francesco Locatello
The use of simulated data in the field of causal discovery is ubiquitous due to the scarcity of annotated real data.
1 code implementation • NeurIPS 2023 • Francesco Montagna, Atalanti A. Mastakouri, Elias Eulig, Nicoletta Noceti, Lorenzo Rosasco, Dominik Janzing, Bryon Aragam, Francesco Locatello
When domain knowledge is limited and experimentation is restricted by ethical, financial, or time constraints, practitioners turn to observational causal discovery methods to recover the causal structure, exploiting the statistical properties of their data.
1 code implementation • NeurIPS 2023 • Giacomo Meanti, Antoine Chatalic, Vladimir R. Kostic, Pietro Novelli, Massimiliano Pontil, Lorenzo Rosasco
Our empirical and theoretical analysis shows that the proposed estimators provide a sound and efficient way to learn large scale dynamical systems.
no code implementations • 6 Apr 2023 • Francesco Montagna, Nicoletta Noceti, Lorenzo Rosasco, Kun Zhang, Francesco Locatello
This paper demonstrates how to discover the whole causal graph from the second derivative of the log-likelihood in observational non-linear additive Gaussian noise models.
no code implementations • 6 Apr 2023 • Francesco Montagna, Nicoletta Noceti, Lorenzo Rosasco, Kun Zhang, Francesco Locatello
Causal discovery methods are intrinsically constrained by the set of assumptions needed to ensure structure identifiability.
no code implementations • 9 Mar 2023 • Gaia Grosso, Nicolò Lai, Marco Letizia, Jacopo Pazzini, Marco Rando, Lorenzo Rosasco, Andrea Wulzer, Marco Zanetti
We here propose a machine learning approach for monitoring particle detectors in real-time.
no code implementations • 10 Feb 2023 • Andrea Maracani, Raffaello Camoriano, Elisa Maiettini, Davide Talon, Lorenzo Rosasco, Lorenzo Natale
Fine-tuning and Domain Adaptation emerged as effective strategies for efficiently transferring deep learning models to new target tasks.
no code implementations • 24 Dec 2022 • Vassilis Apidopoulos, Tomaso Poggio, Lorenzo Rosasco, Silvia Villa
In this paper, we focus on iterative regularization in the context of classification.
no code implementations • 4 Dec 2022 • Andrea Della Vecchia, Ernesto de Vito, Lorenzo Rosasco
We study a natural extension of classical empirical risk minimization, where the hypothesis space is a random subspace of a given space.
no code implementations • 16 Sep 2022 • Paolo Didier Alfano, Vito Paolo Pastore, Lorenzo Rosasco, Francesca Odone
In this paper, focusing on image classification, we consider a simple transfer learning approach exploiting pre-trained convolutional features as input for a fast-to-train kernel method.
no code implementations • 14 Sep 2022 • Paolo Didier Alfano, Marco Rando, Marco Letizia, Francesca Odone, Lorenzo Rosasco, Vito Paolo Pastore
We compare our method with state-of-the-art unsupervised approaches, where a set of pre-defined hand-crafted features is used for clustering of plankton images.
no code implementations • 2 Aug 2022 • Emilia Magnani, Nicholas Krämer, Runa Eschenhagen, Lorenzo Rosasco, Philipp Hennig
Neural operators are a type of deep architecture that learns to solve (i.e., learns the nonlinear solution operator of) partial differential equations (PDEs).
1 code implementation • 27 Jun 2022 • Federico Ceola, Elisa Maiettini, Giulia Pasquale, Giacomo Meanti, Lorenzo Rosasco, Lorenzo Natale
In this work, we focus on the instance segmentation task and provide a comprehensive study of different techniques that allow adapting an object segmentation model in presence of novel objects or different domains.
no code implementations • 10 Jun 2022 • Marco Rando, Cesare Molinari, Silvia Villa, Lorenzo Rosasco
For smooth convex functions we prove almost sure convergence of the iterates and a convergence rate on the function values of the form $O((d/l) k^{-c})$ for every $c<1/2$, which is arbitrarily close to that of Stochastic Gradient Descent (SGD) in terms of number of iterations.
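A hedged sketch of a structured zeroth-order step of this flavor, using $l \le d$ random orthonormal directions and finite differences (step sizes, test function, and the helper name szd_step are illustrative assumptions, not the paper's exact algorithm):

```python
import numpy as np

def szd_step(f, x, l, h=1e-5, lr=0.1, rng=np.random.default_rng()):
    d = x.size
    # l random orthonormal directions via QR of a Gaussian matrix
    P, _ = np.linalg.qr(rng.standard_normal((d, l)))
    # finite-difference surrogate gradient; the d/l factor makes its
    # expectation match the true gradient, since E[P P^T] = (l/d) I
    g = sum((f(x + h * P[:, i]) - f(x)) / h * P[:, i] for i in range(l)) * (d / l)
    return x - lr * g

x = np.ones(10)
for _ in range(200):
    x = szd_step(lambda z: 0.5 * (z ** 2).sum(), x, l=3)
print(np.linalg.norm(x))   # close to 0: the iterates approach the minimizer
```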
no code implementations • 31 May 2022 • Pierre Laforgue, Andrea Della Vecchia, Nicolò Cesa-Bianchi, Lorenzo Rosasco
We introduce and analyze AdaTask, a multitask online learning algorithm that adapts to the unknown structure of the tasks.
1 code implementation • 27 May 2022 • Vladimir Kostic, Pietro Novelli, Andreas Maurer, Carlo Ciliberto, Lorenzo Rosasco, Massimiliano Pontil
We formalize a framework to learn the Koopman operator from finite data trajectories of the dynamical system.
no code implementations • 5 Apr 2022 • Marco Letizia, Gianvito Losapio, Marco Rando, Gaia Grosso, Andrea Wulzer, Maurizio Pierini, Marco Zanetti, Lorenzo Rosasco
We present a machine learning approach for model-independent new physics searches.
no code implementations • 1 Apr 2022 • Daniele Lagomarsino-Oneto, Giacomo Meanti, Nicolò Pagliana, Alessandro Verri, Andrea Mazzino, Lorenzo Rosasco, Agnese Seminara
We train supervised learning algorithms using the past history of wind to predict its value at a future time (horizon).
no code implementations • 16 Mar 2022 • Jaouad Mourtada, Lorenzo Rosasco
In this note, we provide an elementary analysis of the prediction error of ridge regression with random design.
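For concreteness, a minimal simulation of the setting (data model, noise level, and regularization value are illustrative assumptions); with isotropic Gaussian design, the prediction error reduces to the squared parameter error:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 50, 1.0
X = rng.standard_normal((n, d))                 # random design
w_star = rng.standard_normal(d) / np.sqrt(d)
y = X @ w_star + 0.1 * rng.standard_normal(n)

w_hat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
# population prediction error E_x[(x . w_hat - x . w*)^2] = ||w_hat - w*||^2
print(((w_hat - w_star) ** 2).sum())
```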
no code implementations • 3 Feb 2022 • Stefano Vigogna, Giacomo Meanti, Ernesto de Vito, Lorenzo Rosasco
We study the behavior of error bounds for multiclass classification under suitable margin conditions.
no code implementations • 1 Feb 2022 • Cesare Molinari, Mathurin Massias, Lorenzo Rosasco, Silvia Villa
Our approach is based on a primal-dual algorithm of which we analyze convergence and stability properties, even in the case where the original problem is unfeasible.
no code implementations • 31 Jan 2022 • Antoine Chatalic, Nicolas Schreuder, Alessandro Rudi, Lorenzo Rosasco
Our main result is an upper bound on the approximation error of this procedure.
no code implementations • 30 Jan 2022 • Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco
Computing a Gaussian process (GP) posterior has a computational cost cubic in the number of historical points.
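The cubic cost is easy to see in code: the exact posterior requires factorizing the $n \times n$ kernel matrix. A minimal sketch, with kernel, noise level, and data as illustrative assumptions:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gauss(A, B, s=1.0):
    return np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / (2 * s * s))

def gp_posterior(X, y, Xs, noise=1e-2):
    K = gauss(X, X) + noise * np.eye(len(X))   # n x n: the Cholesky below is O(n^3)
    Ks = gauss(X, Xs)
    c = cho_factor(K)
    mean = Ks.T @ cho_solve(c, y)
    var = np.diag(gauss(Xs, Xs) - Ks.T @ cho_solve(c, Ks))
    return mean, var

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2)); y = np.sin(X).sum(1)
mean, var = gp_posterior(X, y, X[:3])
```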
1 code implementation • 17 Jan 2022 • Giacomo Meanti, Luigi Carratino, Ernesto de Vito, Lorenzo Rosasco
Our analysis shows the benefit of the proposed approach, that we hence incorporate in a library for large scale kernel methods to derive adaptively tuned solutions.
1 code implementation • 21 Oct 2021 • Antoine Chatalic, Luigi Carratino, Ernesto de Vito, Lorenzo Rosasco
Compressive learning is an approach to efficient large scale learning based on sketching an entire dataset to a single mean embedding (the sketch), i.e., a vector of generalized moments.
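A minimal sketch of such a sketch: average random Fourier features over the whole dataset to obtain one vector of generalized moments (feature dimension and bandwidth are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 5))     # dataset to be compressed
m, sigma = 256, 1.0
W = rng.standard_normal((X.shape[1], m)) / sigma
Z = np.exp(1j * X @ W)                   # random Fourier features
sketch = Z.mean(axis=0)                  # single m-dimensional mean embedding
print(sketch.shape)                      # (256,): the dataset size is gone
```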
no code implementations • 20 Sep 2021 • Francesca Bartolucci, Ernesto de Vito, Lorenzo Rosasco, Stefano Vigogna
Characterizing the function spaces corresponding to neural networks can provide a way to understand their properties.
no code implementations • NeurIPS 2021 • Luigi Carratino, Stefano Vigogna, Daniele Calandriello, Lorenzo Rosasco
We introduce ParK, a new large-scale solver for kernel ridge regression.
no code implementations • 16 Jun 2021 • Nicola Rigolli, Nicodemo Magnoli, Lorenzo Rosasco, Agnese Seminara
Animal behavior and neural recordings show that the brain is able to measure both the intensity of an odor and the timing of odor encounters.
no code implementations • 16 Jun 2021 • Marco Rando, Luigi Carratino, Silvia Villa, Lorenzo Rosasco
In this paper, we introduce Ada-BKB (Adaptive Budgeted Kernelized Bandit), a no-regret Gaussian process optimization algorithm for functions on continuous domains, that provably runs in $O(T^2 d_\text{eff}^2)$, where $d_\text{eff}$ is the effective dimension of the explored space, typically much smaller than $T$.
no code implementations • 9 Jun 2021 • Bernhard Stankewitz, Nicole Mücke, Lorenzo Rosasco
Optimization in machine learning typically deals with the minimization of empirical objectives defined by training data.
no code implementations • 29 Apr 2021 • Diego Ferigo, Raffaello Camoriano, Paolo Maria Viceconte, Daniele Calandriello, Silvio Traversaro, Lorenzo Rosasco, Daniele Pucci
Balancing and push-recovery are essential capabilities enabling humanoid robots to solve complex locomotion tasks.
1 code implementation • 25 Feb 2021 • Gian Maria Marconi, Raffaello Camoriano, Lorenzo Rosasco, Carlo Ciliberto
Among these, computing the inverse kinematics of a redundant robot arm poses a significant challenge due to the non-linear structure of the robot, the hard joint constraints and the non-invertible kinematics map.
no code implementations • 28 Dec 2020 • Elisa Maiettini, Andrea Maracani, Raffaello Camoriano, Giulia Pasquale, Vadim Tikhanoff, Lorenzo Rosasco, Lorenzo Natale
We show that the robot can improve adaptation to novel domains, either by interacting with a human teacher (Active Learning) or with an autonomous supervision (Semi-supervised Learning).
1 code implementation • 25 Nov 2020 • Federico Ceola, Elisa Maiettini, Giulia Pasquale, Lorenzo Rosasco, Lorenzo Natale
This shortens training time while maintaining state-of-the-art performance.
1 code implementation • 25 Nov 2020 • Federico Ceola, Elisa Maiettini, Giulia Pasquale, Lorenzo Rosasco, Lorenzo Natale
Our approach is validated on the YCB-Video dataset which is widely adopted in the computer vision and robotics community, demonstrating that we can achieve and even surpass performance of the state-of-the-art, with a significant reduction (${\sim}6\times$) of the training time.
1 code implementation • ICML 2020 • Dominic Richards, Patrick Rebeschini, Lorenzo Rosasco
Under standard source and capacity assumptions, we establish high probability bounds on the predictive performance for each agent as a function of the step size, number of iterations, inverse spectral gap of the communication matrix and number of Random Features.
no code implementations • 28 Jun 2020 • Akshay Rangamani, Lorenzo Rosasco, Tomaso Poggio
We study the average $\mbox{CV}_{loo}$ stability of kernel ridge-less regression and derive corresponding risk bounds.
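Ridge-less regression is ridge regression with the regularization parameter set to zero, i.e., the minimum-norm interpolant; a small sketch with an assumed Gaussian kernel:

```python
import numpy as np

def gauss(A, B, s=1.0):
    return np.exp(-((A[:, None] - B[None, :]) ** 2) / (2 * s * s))

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 30)
y = np.sin(x)
alpha = np.linalg.pinv(gauss(x, x)) @ y   # lambda = 0: exact interpolation
xt = np.linspace(-3, 3, 5)
print(gauss(xt, x) @ alpha - np.sin(xt))  # errors off the samples, typically small
```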
1 code implementation • NeurIPS 2020 • Giacomo Meanti, Luigi Carratino, Lorenzo Rosasco, Alessandro Rudi
Kernel methods provide an elegant and principled approach to nonparametric learning, but so far could hardly be used in large scale problems, since naïve implementations scale poorly with data size.
no code implementations • 17 Jun 2020 • Nicolò Pagliana, Alessandro Rudi, Ernesto De Vito, Lorenzo Rosasco
We study the learning properties of nonparametric ridge-less least squares.
no code implementations • 17 Jun 2020 • Andrea Della Vecchia, Jaouad Mourtada, Ernesto de Vito, Lorenzo Rosasco
We study a natural extension of classical empirical risk minimization, where the hypothesis space is a random subspace of a given space.
1 code implementation • 17 Jun 2020 • Cesare Molinari, Mathurin Massias, Lorenzo Rosasco, Silvia Villa
We study iterative regularization for linear models, when the bias is convex but not necessarily strongly convex.
no code implementations • 11 Jun 2020 • Dominic Richards, Jaouad Mourtada, Lorenzo Rosasco
We analyze the prediction error of ridge regression in an asymptotic regime where the sample size and dimension go to infinity at a proportional rate.
no code implementations • 28 May 2020 • Gian Maria Marconi, Lorenzo Rosasco, Carlo Ciliberto
Geometric representation learning has recently shown great promise in several machine learning settings, ranging from relational learning to language processing and generative models.
1 code implementation • ICML 2020 • Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco
Gaussian processes (GPs) are one of the most successful frameworks to model uncertainty.
no code implementations • 22 Feb 2020 • Cristian Rusu, Lorenzo Rosasco
We investigate numerically efficient approximations of eigenspaces associated to symmetric and general matrices.
no code implementations • 13 Feb 2020 • Carlo Ciliberto, Lorenzo Rosasco, Alessandro Rudi
We propose and analyze a novel theoretical and algorithmic framework for structured prediction.
no code implementations • NeurIPS 2018 • Daniele Calandriello, Lorenzo Rosasco
We investigate the efficiency of k-means in terms of both statistical and computational requirements.
no code implementations • 18 Jul 2019 • Cristian Rusu, Lorenzo Rosasco
We study the problem of approximating orthogonal matrices so that their application is numerically fast and yet accurate.
no code implementations • 11 Jul 2019 • Nicholas Sterge, Bharath Sriperumbudur, Lorenzo Rosasco, Alessandro Rudi
In this paper, we propose and study a Nyström-based approach to efficient large scale kernel principal component analysis (PCA).
no code implementations • 8 Jul 2019 • Enrico Cecini, Ernesto de Vito, Lorenzo Rosasco
Our main technical contribution is an analysis of the expected distortion achieved by the proposed algorithm, when the data are assumed to be sampled from a fixed unknown distribution.
no code implementations • NeurIPS 2019 • Nicolò Pagliana, Lorenzo Rosasco
We study learning properties of accelerated gradient descent methods for linear least-squares in Hilbert spaces.
no code implementations • 27 May 2019 • Ernesto De Vito, Nicole Mücke, Lorenzo Rosasco
We study reproducing kernel Hilbert spaces (RKHS) on a Riemannian manifold.
1 code implementation • 13 Mar 2019 • Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco
Moreover, we show that our procedure selects at most $\tilde{O}(d_{\text{eff}})$ points, where $d_{\text{eff}}$ is the effective dimension of the explored space, which is typically much smaller than both $d$ and $t$.
no code implementations • 12 Mar 2019 • Andrzej Banburski, Qianli Liao, Brando Miranda, Lorenzo Rosasco, Fernanda De La Torre, Jack Hidary, Tomaso Poggio
In particular, gradient descent induces a dynamics of the normalized weights which converge for $t \to \infty$ to an equilibrium which corresponds to a minimum norm (or maximum margin) solution.
no code implementations • NeurIPS 2019 • Nicole Mücke, Gergely Neu, Lorenzo Rosasco
While stochastic gradient descent (SGD) is one of the major workhorses in machine learning, the learning properties of many practically used variants are poorly understood.
1 code implementation • NeurIPS 2018 • Alessandro Rudi, Daniele Calandriello, Luigi Carratino, Lorenzo Rosasco
Leverage score sampling provides an appealing way to perform approximate computations for large matrices.
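The ridge leverage scores in question are the diagonal entries of $K(K+\lambda n I)^{-1}$; a direct (non-approximate) computation, with kernel and $\lambda$ as illustrative assumptions:

```python
import numpy as np

def ridge_leverage_scores(K, lam):
    n = K.shape[0]
    return np.diag(K @ np.linalg.inv(K + lam * n * np.eye(n)))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
scores = ridge_leverage_scores(K, lam=1e-3)
# sample columns with probability proportional to their leverage score
idx = rng.choice(100, size=20, replace=False, p=scores / scores.sum())
```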
no code implementations • NeurIPS 2018 • Luigi Carratino, Alessandro Rudi, Lorenzo Rosasco
Sketching and stochastic gradient methods are arguably the most common techniques to derive efficient large scale learning algorithms.
no code implementations • NeurIPS 2018 • Alessandro Rudi, Carlo Ciliberto, Gian Maria Marconi, Lorenzo Rosasco
Structured prediction provides a general framework to deal with supervised problems where the outputs have semantically rich structure.
1 code implementation • NeurIPS 2018 • Dimitrios Milios, Raffaello Camoriano, Pietro Michiardi, Lorenzo Rosasco, Maurizio Filippone
In this paper, we study the problem of deriving fast and accurate classification algorithms with uncertainty quantification.
no code implementations • 23 Mar 2018 • Elisa Maiettini, Giulia Pasquale, Lorenzo Rosasco, Lorenzo Natale
We address the size and imbalance of training data by exploiting the stochastic subsampling intrinsic to the method and a novel, fast, bootstrapping approach.
no code implementations • 22 Feb 2018 • Gergely Neu, Lorenzo Rosasco
We propose and analyze a variant of the classic Polyak-Ruppert averaging scheme, broadly used in stochastic gradient methods.
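The classic scheme being varied: run SGD and return the average of the iterates rather than the last one. A minimal sketch on least squares (objective and step size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
w, avg = np.zeros(5), np.zeros(5)
w_star = np.ones(5)
for t in range(1, 10_001):
    x = rng.standard_normal(5)
    y = x @ w_star + 0.1 * rng.standard_normal()
    grad = (w @ x - y) * x          # stochastic gradient of the squared loss
    w -= 0.01 * grad
    avg += (w - avg) / t            # Polyak-Ruppert running average of the iterates
print(np.linalg.norm(avg - w_star))
```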
no code implementations • 20 Jan 2018 • Junhong Lin, Alessandro Rudi, Lorenzo Rosasco, Volkan Cevher
In this paper, we study regression problems over a separable Hilbert space with the square loss, covering non-parametric regression over a reproducing kernel Hilbert space.
no code implementations • 30 Dec 2017 • Tomaso Poggio, Kenji Kawaguchi, Qianli Liao, Brando Miranda, Lorenzo Rosasco, Xavier Boix, Jack Hidary, Hrushikesh Mhaskar
In this note, we show that the dynamics associated to gradient descent minimization of nonlinear networks is topologically equivalent, near the asymptotically stable minima of the empirical error, to a linear gradient system in a quadratic potential with a degenerate (for square loss) or almost degenerate (for logistic or cross-entropy loss) Hessian.
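Read as a statement about linearization, the claim amounts to the following approximation near a minimum (a rough sketch of the statement, not the note's argument):

```latex
% Near an asymptotically stable minimum w* of the empirical error L,
% gradient flow is approximated by a linear system in a quadratic potential:
\dot{w} = -\nabla L(w) \approx -H\,(w - w^{\ast}), \qquad H = \nabla^2 L(w^{\ast}) \succeq 0,
% with H degenerate (square loss) or almost degenerate (logistic / cross-entropy loss).
```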
no code implementations • 21 Oct 2017 • Junhong Lin, Lorenzo Rosasco
In the setting of nonparametric regression, we propose and study a combination of stochastic gradient methods with Nyström subsampling, allowing multiple passes over the data and mini-batches.
1 code implementation • 28 Sep 2017 • Giulia Pasquale, Carlo Ciliberto, Francesca Odone, Lorenzo Rosasco, Lorenzo Natale
We report on an extensive study of the benefits and limitations of current deep learning approaches to object recognition in robot vision scenarios, introducing a novel dataset used for our investigation.
no code implementations • 18 Jul 2017 • Saverio Salzo, Johan A. K. Suykens, Lorenzo Rosasco
In this paper, we discuss how a suitable family of tensor kernels can be used to efficiently solve nonparametric extensions of $\ell^p$ regularized learning methods.
no code implementations • 18 Jul 2017 • Simon Matet, Lorenzo Rosasco, Silvia Villa, Bang Long Vu
We consider the problem of designing efficient regularization algorithms when regularization is encoded by a (strongly) convex functional.
no code implementations • 3 Jul 2017 • Junhong Lin, Lorenzo Rosasco
In this paper, we provide an in-depth theoretical analysis for different variants of doubly stochastic learning algorithms within the setting of nonparametric regression in a reproducing kernel Hilbert space and considering the square loss.
4 code implementations • NeurIPS 2017 • Alessandro Rudi, Luigi Carratino, Lorenzo Rosasco
In this paper, we take a substantial step in scaling up kernel methods, proposing FALKON, a novel algorithm that can efficiently process millions of points.
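FALKON combines Nyström subsampling with preconditioned conjugate gradient; the sketch below shows only the first ingredient, plain Nyström kernel ridge regression on m random centers (all values are illustrative assumptions, and this is not FALKON's actual solver):

```python
import numpy as np

def gauss(A, B, s=1.0):
    return np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / (2 * s * s))

rng = np.random.default_rng(0)
n, m, lam = 5000, 100, 1e-6
X = rng.standard_normal((n, 2)); y = np.sin(X).sum(1)
C = X[rng.choice(n, m, replace=False)]          # m Nystrom centers, m << n
Knm, Kmm = gauss(X, C), gauss(C, C)
# standard Nystrom KRR system: (Knm^T Knm + lam n Kmm) alpha = Knm^T y
alpha = np.linalg.solve(Knm.T @ Knm + lam * n * Kmm, Knm.T @ y)
print(gauss(X[:5], C) @ alpha - y[:5])          # small residuals on sample points
```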
no code implementations • NeurIPS 2017 • Carlo Ciliberto, Alessandro Rudi, Lorenzo Rosasco, Massimiliano Pontil
However, in practice assuming the tasks to be linearly related might be restrictive, and allowing for nonlinear structures is a challenge.
no code implementations • 28 Mar 2017 • Guillaume Garrigos, Lorenzo Rosasco, Silvia Villa
We provide a comprehensive study of the convergence of the forward-backward algorithm under suitable geometric conditions, such as conditioning or Łojasiewicz properties.
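The simplest instance of the forward-backward algorithm is ISTA for the lasso, alternating a gradient (forward) step with a proximal (backward) step; a minimal sketch with arbitrary data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
b = A @ (np.eye(100)[0] - np.eye(100)[3])       # 2-sparse ground truth
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(100)
for _ in range(500):
    z = x - step * A.T @ (A @ x - b)                          # forward: gradient step on f
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward: prox of lam * l1 norm
# x is now (approximately) sparse
```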
no code implementations • NeurIPS 2016 • Junhong Lin, Lorenzo Rosasco
We analyze the learning properties of the stochastic gradient method when multiple passes over the data and mini-batches are allowed.
no code implementations • 2 Nov 2016 • Tomaso Poggio, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, Qianli Liao
The paper characterizes classes of functions for which deep learning can be exponentially better than shallow learning.
no code implementations • 28 May 2016 • Junhong Lin, Lorenzo Rosasco
As a byproduct, we derive optimal convergence results for batch gradient methods (even in the non-attainable cases).
1 code implementation • 26 May 2016 • Junhong Lin, Raffaello Camoriano, Lorenzo Rosasco
We study the generalization properties of stochastic gradient methods for learning with convex loss functions and linearly parameterized functions.
no code implementations • NeurIPS 2016 • Carlo Ciliberto, Alessandro Rudi, Lorenzo Rosasco
We propose and analyze a regularization approach for structured prediction problems.
1 code implementation • 17 May 2016 • Raffaello Camoriano, Giulia Pasquale, Carlo Ciliberto, Lorenzo Natale, Lorenzo Rosasco, Giorgio Metta
We consider object recognition in the context of lifelong learning, where a robotic agent learns to discriminate between a growing number of object classes as it accumulates experience about the environment.
1 code implementation • NeurIPS 2017 • Alessandro Rudi, Lorenzo Rosasco
We study the generalization properties of ridge regression with random features in the statistical learning framework.
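The object of study, in a few lines: map the data through m random cosine features, then solve a linear ridge problem (values below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, lam = 1000, 3, 200, 1e-3
X = rng.standard_normal((n, d)); y = np.sin(X).sum(1)
W = rng.standard_normal((d, m)); b = rng.uniform(0, 2 * np.pi, m)
Z = np.sqrt(2 / m) * np.cos(X @ W + b)          # random (Fourier) feature map
w = np.linalg.solve(Z.T @ Z + lam * n * np.eye(m), Z.T @ y)
print(np.abs(Z @ w - y).mean())                 # training fit of the RF ridge model
```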
no code implementations • 18 Jan 2016 • Raffaello Camoriano, Silvio Traversaro, Lorenzo Rosasco, Giorgio Metta, Francesco Nori
This paper presents a novel approach for incremental semiparametric inverse dynamics learning.
1 code implementation • 19 Oct 2015 • Tomas Angles, Raffaello Camoriano, Alessandro Rudi, Lorenzo Rosasco
Early stopping is a well known approach to reduce the time complexity for performing training and model selection of large scale learning machines.
4 code implementations • 16 Oct 2015 • Maximilian Nickel, Lorenzo Rosasco, Tomaso Poggio
Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs.
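This appears to be the holographic embeddings (HolE) paper; its core scoring operation, circular correlation of entity embeddings, can be computed in $O(m \log m)$ time via the FFT. A sketch with random placeholder embeddings:

```python
import numpy as np

def circular_correlation(a, b):
    return np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)).real

rng = np.random.default_rng(0)
m = 64
e_s, e_o, r = rng.standard_normal((3, m))       # subject, object, relation embeddings
score = r @ circular_correlation(e_s, e_o)      # triple plausibility (pre-sigmoid)
```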
no code implementations • 23 Sep 2015 • Giulia Pasquale, Tanis Mar, Carlo Ciliberto, Lorenzo Rosasco, Lorenzo Natale
The importance of depth perception in the interactions that humans have within their nearby space is a well established fact.
no code implementations • 5 Aug 2015 • Fabio Anselmi, Lorenzo Rosasco, Cheston Tan, Tomaso Poggio
In i-theory a typical layer of a hierarchical architecture consists of HW modules pooling the dot products of the inputs to the layer with the transformations of a few templates under a group.
1 code implementation • NeurIPS 2015 • Alessandro Rudi, Raffaello Camoriano, Lorenzo Rosasco
We study Nyström type subsampling approaches to large scale kernel methods, and prove learning bounds in the statistical learning setting, where random sampling and high probability estimates are considered.
no code implementations • CVPR 2015 • Carlo Ciliberto, Lorenzo Rosasco, Silvia Villa
Multi-task learning is a natural approach for computer vision applications that require the simultaneous solution of several distinct but related problems, e.g., object detection, classification, tracking of multiple agents, or denoising, to name a few.
no code implementations • 13 Apr 2015 • Giulia Pasquale, Carlo Ciliberto, Francesca Odone, Lorenzo Rosasco, Lorenzo Natale
In this paper we investigate such possibility, while taking further steps in developing a computational vision system to be embedded on a robotic platform, the iCub humanoid robot.
1 code implementation • 13 Apr 2015 • Carlo Ciliberto, Youssef Mroueh, Tomaso Poggio, Lorenzo Rosasco
In this context a fundamental question is how to incorporate the tasks' structure in the learning problem. We tackle this question by studying a general computational framework that allows encoding a priori knowledge of the tasks' structure in the form of a convex penalty; in this setting, a variety of previously proposed methods can be recovered as special cases, including linear and non-linear approaches.
no code implementations • 31 Mar 2015 • Junhong Lin, Lorenzo Rosasco, Ding-Xuan Zhou
We consider the problem of supervised learning with convex loss functions and propose a new form of iterative regularization based on the subgradient method.
no code implementations • 19 Mar 2015 • Fabio Anselmi, Lorenzo Rosasco, Tomaso Poggio
We discuss data representations which can be learned automatically from data, are invariant to transformations, and are at the same time selective, in the sense that two points have the same representation only if one is a transformation of the other.
no code implementations • NeurIPS 2013 • Alessandro Rudi, Guille D. Canas, Lorenzo Rosasco
A large number of algorithms in machine learning, from principal component analysis (PCA), and its non-linear (kernel) extensions, to more recent spectral embedding and support estimation methods, rely on estimating a linear subspace from samples.
no code implementations • 16 Jun 2014 • Georgios Evangelopoulos, Stephen Voinea, Chiyuan Zhang, Lorenzo Rosasco, Tomaso Poggio
Recognition of speech, and in particular the ability to generalize and learn from small sets of labelled examples like humans do, depends on an appropriate representation of the acoustic input.
no code implementations • NeurIPS 2015 • Lorenzo Rosasco, Silvia Villa
Within a statistical learning setting, we propose and study an iterative regularization algorithm for least squares defined by an incremental gradient method.
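The regularization parameter here is the number of passes, not an explicit penalty; a minimal early-stopped incremental gradient sketch for least squares (data and step size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 20
X = rng.standard_normal((n, d))
w_star = np.ones(d) / np.sqrt(d)
y = X @ w_star + 0.5 * rng.standard_normal(n)

w = np.zeros(d)
for epoch in range(10):                     # few epochs ~ strong regularization
    for i in range(n):                      # one incremental pass over the data
        w -= 0.01 * (X[i] @ w - y[i]) * X[i]
print(np.linalg.norm(w - w_star))
```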
no code implementations • 1 Apr 2014 • Chiyuan Zhang, Georgios Evangelopoulos, Stephen Voinea, Lorenzo Rosasco, Tomaso Poggio
We present the main theoretical and computational aspects of a framework for unsupervised learning of invariant audio representations, empirically evaluated on music genre classification.
no code implementations • 17 Nov 2013 • Fabio Anselmi, Joel Z. Leibo, Lorenzo Rosasco, Jim Mutch, Andrea Tacchetti, Tomaso Poggio
It also suggests that the main computational goal of the ventral stream of visual cortex is to provide a hierarchical representation of new objects/images which is invariant to transformations, stable, and discriminative for recognition, and that this representation may be continuously learned in an unsupervised way during development and visual experience.
no code implementations • 15 Jun 2013 • Sean Ryan Fanello, Carlo Ciliberto, Matteo Santoro, Lorenzo Natale, Giorgio Metta, Lorenzo Rosasco, Francesca Odone
In this paper we present and start analyzing the iCub World data-set, an object recognition data-set that we acquired using a Human-Robot Interaction (HRI) scheme and the iCub humanoid robot platform.
no code implementations • 24 Mar 2013 • Silvia Villa, Lorenzo Rosasco, Tomaso Poggio
We consider the fundamental question of learnability of a hypotheses class in the supervised learning setting and in the general learning setting introduced by Vladimir Vapnik.
no code implementations • NeurIPS 2012 • Guillermo Canas, Tomaso Poggio, Lorenzo Rosasco
We study the problem of estimating a manifold from random samples.
no code implementations • NeurIPS 2012 • Guillermo Canas, Lorenzo Rosasco
We study the problem of estimating, in the sense of optimal transport metrics, a measure which is assumed supported on a manifold embedded in a Hilbert space.
no code implementations • NeurIPS 2012 • Youssef Mroueh, Tomaso Poggio, Lorenzo Rosasco, Jean-Jacques Slotine
In this paper we discuss a novel framework for multiclass learning, defined by a suitable coding/decoding strategy, namely the simplex coding, that allows generalizing to multiple classes a relaxation approach commonly used in binary classification.
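The simplex coding assigns to the c classes c unit vectors in $\mathbb{R}^{c-1}$ with pairwise inner products $-1/(c-1)$; a minimal construction sketch (embedded in $\mathbb{R}^c$ for brevity, with the projection to $\mathbb{R}^{c-1}$ omitted):

```python
import numpy as np

def simplex_code(c):
    # rows are the c class code vectors: unit norm, pairwise inner product -1/(c-1)
    return np.sqrt(c / (c - 1)) * (np.eye(c) - np.ones((c, c)) / c)

C = simplex_code(4)
print(np.round(C @ C.T, 3))   # 1 on the diagonal, -1/3 off the diagonal
```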
no code implementations • 16 Apr 2012 • Ernesto De Vito, Lorenzo Rosasco, Alessandro Toigo
We consider the problem of learning a set from random samples.
4 code implementations • 30 Jun 2011 • Mauricio A. Alvarez, Lorenzo Rosasco, Neil D. Lawrence
Kernel methods are among the most popular techniques in machine learning.
no code implementations • NeurIPS 2010 • Sofia Mosci, Silvia Villa, Alessandro Verri, Lorenzo Rosasco
We deal with the problem of variable selection when variables must be selected group-wise, with possibly overlapping groups defined a priori.
no code implementations • NeurIPS 2010 • Ernesto De Vito, Lorenzo Rosasco, Alessandro Toigo
In this paper we consider the problem of learning from data the support of a probability distribution when the distribution does not have a density (with respect to some reference measure).
no code implementations • NeurIPS 2009 • Jake Bouvrie, Lorenzo Rosasco, Tomaso Poggio
A goal of central importance in the study of hierarchical models for object recognition -- and indeed the visual cortex -- is that of understanding quantitatively the trade-off between invariance and selectivity, and how invariance and discrimination properties contribute towards providing an improved representation useful for learning from data.