1 code implementation • 22 Jun 2023 • Junjia Liu, Zhihao LI, WanYu Lin, Sylvain Calinon, Kay Chen Tan, Fei Chen
Soft object manipulation tasks in domestic scenes pose a significant challenge for existing robotic skill learning techniques due to their complex dynamics and variable shape characteristics.
no code implementations • 10 Jun 2022 • Suhan Shetty, Teguh Lembono, Tobias Loew, Sylvain Calinon
We treat the task parameters as random variables, and for a given task, we generate samples for decision variables from the conditional distribution to initialize the optimization solver.
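The idea of conditioning a learned joint distribution on the task to warm-start a solver can be sketched with a Gaussian model (an assumed, minimal stand-in for the paper's learned distribution; all names here are illustrative):

```python
import numpy as np

def condition_gaussian(mu, sigma, t_star, n_t):
    """Condition N(mu, sigma) over [t, x] on task parameters t = t_star.

    Returns the mean and covariance of the decision variables x given t,
    from which warm-start samples for the solver can be drawn.
    """
    mu_t, mu_x = mu[:n_t], mu[n_t:]
    s_tt = sigma[:n_t, :n_t]
    s_xt = sigma[n_t:, :n_t]
    s_xx = sigma[n_t:, n_t:]
    gain = s_xt @ np.linalg.inv(s_tt)
    mu_c = mu_x + gain @ (t_star - mu_t)
    sigma_c = s_xx - gain @ s_xt.T
    return mu_c, sigma_c

# Toy joint over one task parameter and two decision variables.
mu = np.array([0.0, 1.0, -1.0])
sigma = np.array([[1.0, 0.8, 0.2],
                  [0.8, 2.0, 0.3],
                  [0.2, 0.3, 1.5]])
mu_c, sigma_c = condition_gaussian(mu, sigma, np.array([0.5]), n_t=1)

# Candidate initializations for the optimization solver for this task.
rng = np.random.default_rng(0)
inits = rng.multivariate_normal(mu_c, sigma_c, size=5)
```

Each row of `inits` is one sampled initialization; the solver would then be run from each and the best solution kept.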
1 code implementation • 12 May 2022 • Junjia Liu, Yiting Chen, Zhipeng Dong, Shixiong Wang, Sylvain Calinon, Miao Li, Fei Chen
This letter describes an approach to performing the well-known Chinese cooking art of stir-fry on a bimanual robot system.
no code implementations • 2 Mar 2022 • Boyang Ti, Yongsheng Gao, Jie Zhao, Sylvain Calinon
Daily manipulation tasks are characterized by geometric primitives related to actions and object shapes.
no code implementations • 21 Apr 2021 • Sylvain Calinon
This chapter presents an overview of techniques used for the analysis, editing, and synthesis of time series, with a particular emphasis on motion data.
1 code implementation • 12 Jan 2021 • Suhan Shetty, João Silvério, Sylvain Calinon
In robotics, ergodic control extends the tracking principle by specifying a probability distribution over an area to cover instead of a trajectory to track.
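A minimal 1-D sketch of the idea (assumed textbook formulation, not the paper's own solver): coverage quality is measured by a weighted distance between Fourier coefficients of the target distribution and of the trajectory's empirical distribution on [0, 1].

```python
import numpy as np

def fourier_coeffs(samples, weights, n_coeffs):
    """Weighted cosine-basis coefficients of a point set on [0, 1]."""
    k = np.arange(n_coeffs)[:, None]              # frequency index
    basis = np.cos(np.pi * k * samples[None, :])  # cosine basis on [0, 1]
    norm = np.where(k[:, 0] == 0, 1.0, np.sqrt(2.0))
    return norm * (basis @ weights)

def ergodic_metric(traj, target_samples, target_weights, n_coeffs=10):
    """Sobolev-weighted mismatch between trajectory and target coverage."""
    w_traj = np.full(len(traj), 1.0 / len(traj))
    c_traj = fourier_coeffs(traj, w_traj, n_coeffs)
    c_ref = fourier_coeffs(target_samples, target_weights, n_coeffs)
    lam = (1.0 + np.arange(n_coeffs) ** 2) ** -1.0
    return float(np.sum(lam * (c_traj - c_ref) ** 2))

# Target: a Gaussian bump centered at 0.7, discretized on a grid.
grid = np.linspace(0, 1, 200)
target = np.exp(-0.5 * ((grid - 0.7) / 0.1) ** 2)
target /= target.sum()

rng = np.random.default_rng(1)
good = np.clip(rng.normal(0.7, 0.1, 500), 0, 1)  # covers the target mode
bad = rng.uniform(0, 0.3, 500)                   # stays away from the mode
```

A trajectory whose time is spent in proportion to the target density scores a lower metric than one that ignores it; an ergodic controller would descend this metric online.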
Robotics · Systems and Control · Dynamical Systems · Optimization and Control Applications
1 code implementation • 11 Nov 2020 • Teguh Santoso Lembono, Emmanuel Pignat, Julius Jankowski, Sylvain Calinon
We propose a generative adversarial network approach to learn the distribution of valid robot configurations under such constraints.
no code implementations • 7 Oct 2020 • Emmanuel Pignat, João Silvério, Sylvain Calinon
In particular, we show that the proposed approach can be extended to PoE with a nullspace structure (PoENS), where the model is able to recover tasks that are masked by the resolution of higher-level objectives.
no code implementations • 1 Jul 2020 • Martin Troussard, Emmanuel Pignat, Parameswaran Kamalaruban, Sylvain Calinon, Volkan Cevher
This paper proposes an inverse reinforcement learning (IRL) framework to accelerate learning when the learner-teacher interaction is limited during training.
no code implementations • 31 Jan 2020 • Antonio Paolillo, Teguh Santoso Lembono, Sylvain Calinon
This paper addresses the problem of efficiently achieving visual predictive control tasks.
no code implementations • 11 Oct 2019 • Noémie Jaquier, Leonel Rozo, Sylvain Calinon, Mathias Bürger
Bayesian optimization (BO) has recently become popular in robotics for optimizing control parameters and parametric policies in direct reinforcement learning, owing to its data efficiency and gradient-free nature.
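A hedged sketch of a basic BO loop (generic BO, not the geometry-aware variant this entry studies): a Gaussian-process surrogate with an RBF kernel and an upper-confidence-bound acquisition decide where to evaluate next.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    """RBF kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-4):
    """GP posterior mean and standard deviation at the query points."""
    k = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    k_s = rbf(x_train, x_query)
    sol = np.linalg.solve(k, k_s)
    mu = sol.T @ y_train
    var = np.clip(1.0 - np.sum(k_s * sol, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def objective(x):
    """Unknown reward to maximize (hypothetical test function)."""
    return -(x - 0.3) ** 2

grid = np.linspace(0, 1, 200)
xs = [0.0, 1.0]                          # initial evaluations
ys = [objective(x) for x in xs]
for _ in range(15):                      # BO loop: fit GP, maximize UCB
    mu, sd = gp_posterior(np.array(xs), np.array(ys), grid)
    x_next = grid[np.argmax(mu + 2.0 * sd)]
    xs.append(x_next)
    ys.append(objective(x_next))
best = xs[int(np.argmax(ys))]
```

After a handful of evaluations the loop concentrates near the optimum at 0.3, which is the data-efficiency property that makes BO attractive for physical robots.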
no code implementations • 11 Oct 2019 • Noémie Jaquier, David Ginsbourger, Sylvain Calinon
In learning from demonstrations, it is often desirable to adapt the behavior of the robot as a function of the variability retrieved from human demonstrations and the (un)certainty encoded in different parts of the task.
no code implementations • 12 Sep 2019 • Sylvain Calinon
This article presents an overview of robot learning and adaptive control applications that can benefit from a joint use of Riemannian geometry and probabilistic representations.
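Many of the tools surveyed rely on exponential and logarithmic maps to handle data living on a manifold. As an illustrative sketch (generic, not taken from the article), here is the Karcher mean of unit vectors on the sphere, computed by iterating averaging in the tangent space:

```python
import numpy as np

def log_map(base, x):
    """Tangent vector at `base` pointing toward x along the geodesic."""
    proj = x - np.dot(base, x) * base
    norm = np.linalg.norm(proj)
    if norm < 1e-12:
        return np.zeros_like(base)
    return np.arccos(np.clip(np.dot(base, x), -1.0, 1.0)) * proj / norm

def exp_map(base, v):
    """Point reached from `base` by following tangent direction v."""
    norm = np.linalg.norm(v)
    if norm < 1e-12:
        return base
    return np.cos(norm) * base + np.sin(norm) * v / norm

def karcher_mean(points, iters=50):
    """Riemannian mean on the sphere: average in tangent space, re-project."""
    mean = points[0] / np.linalg.norm(points[0])
    for _ in range(iters):
        v = np.mean([log_map(mean, p) for p in points], axis=0)
        mean = exp_map(mean, v)
    return mean

# Four unit vectors placed symmetrically around the z-axis.
raw = np.array([[1, 0, 2], [-1, 0, 2], [0, 1, 2], [0, -1, 2]], dtype=float)
points = raw / np.linalg.norm(raw, axis=1, keepdims=True)
mean = karcher_mean(points)
```

By symmetry the mean lands on the pole [0, 0, 1]; a plain Euclidean average followed by normalization happens to agree here, but the two diverge for spread-out data, which is why the Riemannian formulation matters.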
1 code implementation • 28 Feb 2019 • Noémie Jaquier, Robert Haschke, Sylvain Calinon
The proposed formulation takes into account the underlying structure of the data and remains efficient when few training data are available.
no code implementations • 6 Jul 2018 • Konstantinos Chatzilygeroudis, Vassilis Vassiliades, Freek Stulp, Sylvain Calinon, Jean-Baptiste Mouret
Most policy search algorithms require thousands of training episodes to find an effective policy, which is often infeasible with a physical robot.
no code implementations • 19 Dec 2017 • João Silvério, Yanlong Huang, Leonel Rozo, Sylvain Calinon, Darwin G. Caldwell
When learning skills from demonstrations, one is often required to think in advance about the appropriate task representation (usually in either operational or configuration space).