Balancing and push-recovery are essential capabilities enabling humanoid robots to solve complex locomotion tasks.
Among these, computing the inverse kinematics of a redundant robot arm poses a significant challenge due to the non-linear structure of the robot, the hard joint constraints, and the non-invertible kinematics map.
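Damped least-squares (Levenberg-Marquardt) iteration is one standard way to cope with a non-invertible kinematics map near singularities. The sketch below is illustrative only, not the method of the source: a planar 3-link redundant arm with an analytic Jacobian, where the damping term regularizes the normal equations.

```python
import numpy as np

def fk(thetas, lengths):
    """Forward kinematics of a planar serial arm: end-effector (x, y)."""
    angles = np.cumsum(thetas)
    return np.array([np.sum(lengths * np.cos(angles)),
                     np.sum(lengths * np.sin(angles))])

def jacobian(thetas, lengths):
    """Analytic 2xN Jacobian of the planar forward kinematics."""
    angles = np.cumsum(thetas)
    J = np.zeros((2, len(thetas)))
    for j in range(len(thetas)):
        # Joint j moves all links j..N-1.
        J[0, j] = -np.sum(lengths[j:] * np.sin(angles[j:]))
        J[1, j] = np.sum(lengths[j:] * np.cos(angles[j:]))
    return J

def ik_dls(target, thetas, lengths, damping=0.1, iters=200):
    """Damped least-squares IK: the damping keeps J J^T + lam^2 I invertible
    even when the Jacobian loses rank (singular configurations)."""
    for _ in range(iters):
        err = target - fk(thetas, lengths)
        J = jacobian(thetas, lengths)
        step = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), err)
        thetas = thetas + step
    return thetas

lengths = np.array([1.0, 1.0, 1.0])
target = np.array([1.5, 1.0])
sol = ik_dls(target, np.array([0.3, 0.3, 0.3]), lengths)
```

Because the arm is redundant (3 joints, 2 task dimensions), the solution found depends on the initialization; hard joint limits would need an extra projection step, which this sketch omits.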
These methods have important limitations for robotics: learning solely from offline data may introduce biases (the so-called domain shift) and prevents adaptation to novel tasks.
In this thesis, we focus on kernel methods, a theoretically sound and effective class of learning algorithms yielding nonparametric estimators.
In this paper, we study the problem of deriving fast and accurate classification algorithms with uncertainty quantification.
We study the generalization properties of stochastic gradient methods for learning with convex loss functions and linearly parameterized functions.
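In this setting a single stochastic gradient pass with a decaying step size and iterate averaging is the basic algorithm under study. The sketch below is a generic instance, not the source's exact scheme: logistic loss (convex) with a linearly parameterized predictor, step size decaying as 1/sqrt(t), and Polyak averaging of the iterates.

```python
import numpy as np

def sgd_logistic(X, y, epochs=20, seed=0):
    """SGD on the logistic loss with a linear model w -> x.w.
    Step size 1/sqrt(t); the averaged iterate w_avg is returned."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    w_avg = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            # Clip the margin for numerical stability of exp().
            margin = np.clip(y[i] * (X[i] @ w), -30.0, 30.0)
            grad = -y[i] * X[i] / (1 + np.exp(margin))  # d/dw log(1+e^{-m})
            w -= grad / np.sqrt(t)
            w_avg += (w - w_avg) / t  # running average of the iterates
    return w_avg

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 2))
w_true = np.array([2.0, -1.0])
y = np.sign(X @ w_true)
w_hat = sgd_logistic(X, y)
```

With convex losses and linear parameterization, step-size decay and averaging are exactly the algorithmic knobs whose choices drive the generalization bounds.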
We consider object recognition in the context of lifelong learning, where a robotic agent learns to discriminate between a growing number of object classes as it accumulates experience about the environment.
Early stopping is a well-known approach to reducing the time complexity of training and model selection for large-scale learning machines.
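The key idea is that the number of iterations itself acts as a regularization parameter, so model selection reduces to monitoring a validation error along the optimization path. A minimal sketch, assuming gradient descent on least squares with a patience-based stopping rule (the data, step size, and patience value are all illustrative):

```python
import numpy as np

def gd_early_stopping(Xtr, ytr, Xval, yval, step=0.01,
                      max_iters=5000, patience=50):
    """Gradient descent on least squares; stop when the validation
    error has not improved for `patience` consecutive iterations."""
    w = np.zeros(Xtr.shape[1])
    best_w, best_err, since = w.copy(), np.inf, 0
    for _ in range(max_iters):
        w -= step * Xtr.T @ (Xtr @ w - ytr) / len(ytr)
        err = np.mean((Xval @ w - yval) ** 2)
        if err < best_err:
            best_w, best_err, since = w.copy(), err, 0
        else:
            since += 1
            if since >= patience:
                break
    return best_w, best_err

rng = np.random.default_rng(0)
n, d = 100, 20
X = rng.standard_normal((n + 50, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.5 * rng.standard_normal(n + 50)
w_es, val_err = gd_early_stopping(X[:n], y[:n], X[n:], y[n:])
```

No explicit penalty term appears: training simply halts before the iterates overfit, which saves both the cost of running to convergence and the cost of sweeping a regularization grid.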
We study Nyström-type subsampling approaches to large-scale kernel methods, and prove learning bounds in the statistical learning setting, where random sampling and high-probability estimates are considered.
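The Nyström construction approximates the full n x n kernel matrix K from a subsample of m landmark columns, K ≈ K_nm K_mm^{-1} K_mn, giving explicit m-dimensional features. A minimal sketch (the RBF kernel, bandwidth, and landmark choice are illustrative assumptions):

```python
import numpy as np

def rbf(A, B, sigma=0.5):
    """Gaussian kernel matrix between the rows of A and the rows of B."""
    d2 = np.sum(A**2, 1)[:, None] - 2 * A @ B.T + np.sum(B**2, 1)[None, :]
    return np.exp(-d2 / (2 * sigma**2))

def nystrom_features(X, landmarks, sigma=0.5, eps=1e-10):
    """Explicit features Phi with Phi @ Phi.T = K_nm K_mm^{-1} K_mn ≈ K."""
    Kmm = rbf(landmarks, landmarks, sigma)
    Knm = rbf(X, landmarks, sigma)
    # Eigendecompose the small m x m block; floor eigenvalues for stability.
    vals, vecs = np.linalg.eigh(Kmm)
    vals = np.maximum(vals, eps)
    return (Knm @ vecs) / np.sqrt(vals)  # = K_nm K_mm^{-1/2}

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
landmarks = X[rng.choice(200, size=50, replace=False)]
Phi = nystrom_features(X, landmarks)
K = rbf(X, X)
rel_err = np.linalg.norm(Phi @ Phi.T - K) / np.linalg.norm(K)
```

Downstream, a linear method (e.g. ridge regression) on Phi costs O(n m^2) instead of the O(n^3) of the exact kernel solver, and the learning bounds quantify how small m can be while preserving accuracy with high probability.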