no code implementations • 29 Jun 2023 • Wanming Yu, Chuanyu Yang, Christopher McGreavy, Eleftherios Triantafyllidis, Guillaume Bellegarda, Milad Shafiee, Auke Jan Ijspeert, Zhibin Li
Robot motor skills can be learned through deep reinforcement learning (DRL) by neural networks as state-action mappings.
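The snippet describes a policy as a state-action mapping learned by a neural network. A minimal sketch of that idea, assuming an arbitrary two-layer MLP with tanh-squashed outputs (the dimensions and architecture here are illustrative, not the paper's):

```python
import numpy as np

# Minimal sketch of a policy as a state-action mapping: a two-layer MLP
# that maps a robot state vector to bounded motor commands. Layer sizes
# and the tanh squashing are illustrative assumptions, not the authors'
# actual architecture.
rng = np.random.default_rng(0)
STATE_DIM, HIDDEN, ACTION_DIM = 12, 32, 4

W1 = rng.standard_normal((STATE_DIM, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, ACTION_DIM)) * 0.1
b2 = np.zeros(ACTION_DIM)

def policy(state):
    """Deterministic state-action mapping: state -> action in [-1, 1]."""
    h = np.tanh(state @ W1 + b1)
    return np.tanh(h @ W2 + b2)

action = policy(rng.standard_normal(STATE_DIM))
print(action.shape)  # (4,)
```

In DRL these weights would be optimized against a reward signal; here they are random, only to show the mapping's shape.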
1 code implementation • 10 May 2023 • Can Pu, Chuanyu Yang, Jinnian Pu, Radim Tylecek, Robert B. Fisher
Next, the refined disparity maps are converted into full-view point clouds or single-view point clouds for the pose fusion module.
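The conversion from disparity maps to point clouds mentioned here follows the standard stereo back-projection. A hedged sketch, assuming a pinhole camera with made-up focal length, baseline, and principal point (none of these values come from the paper):

```python
import numpy as np

# Standard disparity-to-point-cloud conversion: with focal length f (px)
# and stereo baseline B (m), depth is Z = f * B / disparity, and each
# pixel is back-projected through the pinhole model. The camera
# parameters below are assumed for illustration.
f, B = 500.0, 0.1          # focal length (px), baseline (m) -- assumed
cx, cy = 32.0, 24.0        # principal point (px) -- assumed

def disparity_to_points(disp):
    """Back-project a disparity map (H, W) into an (N, 3) point cloud."""
    v, u = np.indices(disp.shape)
    valid = disp > 0                      # skip invalid (zero) disparities
    Z = f * B / disp[valid]
    X = (u[valid] - cx) * Z / f
    Y = (v[valid] - cy) * Z / f
    return np.stack([X, Y, Z], axis=1)

disp = np.full((48, 64), 10.0)            # constant 10-px disparity map
pts = disparity_to_points(disp)
print(pts.shape)  # (3072, 3)
```

With a constant 10-px disparity, every point lands at depth Z = 500 × 0.1 / 10 = 5.0 m, which makes the formula easy to check by hand.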
no code implementations • 9 Feb 2023 • Can Pu, Chuanyu Yang, Jinnian Pu, Robert B. Fisher
More specifically, in the automation stage, the robot navigates to the specified location without the need for precise parking.
no code implementations • 10 Dec 2020 • Chuanyu Yang, Kai Yuan, Qiuguo Zhu, Wanming Yu, Zhibin Li
Achieving versatile robot locomotion requires motor skills which can adapt to previously unseen situations.
no code implementations • 15 Feb 2020 • Zhaole Sun, Kai Yuan, Wenbin Hu, Chuanyu Yang, Zhibin Li
In robotic grasping, objects are often occluded in ungraspable configurations such that no pre-grasp pose can be found, e.g., large flat boxes on the table that can only be grasped from the side.
no code implementations • 11 Feb 2020 • Wenbin Hu, Chuanyu Yang, Kai Yuan, Zhibin Li
The performance of the learned policy is evaluated on three different tasks: grasping a static target, grasping a dynamic target, and re-grasping.
no code implementations • 7 Feb 2020 • Chuanyu Yang, Kai Yuan, Wolfgang Merkt, Taku Komura, Sethu Vijayakumar, Zhibin Li
This paper presents a hierarchical framework for Deep Reinforcement Learning that acquires motor skills for a variety of push recovery and balancing behaviors, i.e., ankle, hip, foot-tilting, and stepping strategies.
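The hierarchical idea described here can be sketched as a high-level policy that selects among discrete recovery strategies while a low-level controller executes the chosen one. Everything below (strategy scorer, placeholder gains, dimensions) is an illustrative assumption, not the paper's framework:

```python
import numpy as np

# Hedged sketch of hierarchical control for push recovery: a high-level
# policy scores the discrete strategies (ankle, hip, foot-tilt, stepping)
# from the perturbation state, and the selected low-level controller maps
# the state to a motor command. All weights and gains are illustrative.
STRATEGIES = ["ankle", "hip", "foot_tilt", "stepping"]

rng = np.random.default_rng(1)
W_hi = rng.standard_normal((6, len(STRATEGIES))) * 0.1  # high-level scorer

def select_strategy(state):
    """High level: pick the recovery strategy with the highest score."""
    scores = state @ W_hi
    return STRATEGIES[int(np.argmax(scores))]

def low_level(strategy, state):
    """Low level: placeholder per-strategy controllers (assumed gains)."""
    gains = {"ankle": 0.5, "hip": 0.8, "foot_tilt": 0.3, "stepping": 1.0}
    return gains[strategy] * state[:2]

state = rng.standard_normal(6)      # perturbation/balance state -- assumed
chosen = select_strategy(state)
torque = low_level(chosen, state)
print(chosen, torque.shape)
```

The split mirrors the common design choice in such frameworks: the discrete strategy decision and the continuous motor command are learned at different levels of abstraction.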
no code implementations • 8 Oct 2017 • Doo Re Song, Chuanyu Yang, Christopher McGreavy, Zhibin Li
This paper presents a deep learning framework that is capable of solving partially observable locomotion tasks based on our novel interpretation of Recurrent Deterministic Policy Gradient (RDPG).
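The core idea behind a recurrent deterministic policy for partial observability is that the actor carries a hidden state summarizing the observation history, so actions depend on the trajectory rather than on the current observation alone. A minimal sketch using a plain RNN cell (the cell type and all dimensions are assumptions, not the authors' RDPG network):

```python
import numpy as np

# Sketch of a recurrent deterministic policy: under partial observability,
# a hidden state h accumulates information across observations, and the
# deterministic action is read out from h. The simple tanh RNN cell here
# is illustrative only.
rng = np.random.default_rng(2)
OBS_DIM, HID_DIM, ACT_DIM = 8, 16, 3

W_oh = rng.standard_normal((OBS_DIM, HID_DIM)) * 0.1   # obs -> hidden
W_hh = rng.standard_normal((HID_DIM, HID_DIM)) * 0.1   # hidden -> hidden
W_ha = rng.standard_normal((HID_DIM, ACT_DIM)) * 0.1   # hidden -> action

def step(h, obs):
    """One recurrent step: update hidden state, emit deterministic action."""
    h_new = np.tanh(obs @ W_oh + h @ W_hh)
    return h_new, np.tanh(h_new @ W_ha)

h = np.zeros(HID_DIM)
for _ in range(5):          # roll out over a short observation sequence
    h, action = step(h, rng.standard_normal(OBS_DIM))
print(action.shape)  # (3,)
```

In RDPG proper, this recurrent actor is trained with a recurrent critic via the deterministic policy gradient; the sketch shows only the forward mapping.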