Search Results for author: Chuanyu Yang

Found 8 papers, 1 paper with code

Identifying Important Sensory Feedback for Learning Locomotion Skills

no code implementations, 29 Jun 2023: Wanming Yu, Chuanyu Yang, Christopher McGreavy, Eleftherios Triantafyllidis, Guillaume Bellegarda, Milad Shafiee, Auke Jan Ijspeert, Zhibin Li

Robot motor skills can be learned through deep reinforcement learning (DRL), in which neural networks serve as state-action mappings.

A Multi-modal Garden Dataset and Hybrid 3D Dense Reconstruction Framework Based on Panoramic Stereo Images for a Trimming Robot

1 code implementation, 10 May 2023: Can Pu, Chuanyu Yang, Jinnian Pu, Radim Tylecek, Robert B. Fisher

Next, the refined disparity maps are converted into full-view point clouds or single-view point clouds for the pose fusion module.

Multi-expert learning of adaptive legged locomotion

no code implementations, 10 Dec 2020: Chuanyu Yang, Kai Yuan, Qiuguo Zhu, Wanming Yu, Zhibin Li

Achieving versatile robot locomotion requires motor skills which can adapt to previously unseen situations.

Learning Pregrasp Manipulation of Objects from Ungraspable Poses

no code implementations, 15 Feb 2020: Zhaole Sun, Kai Yuan, Wenbin Hu, Chuanyu Yang, Zhibin Li

In robotic grasping, objects are often occluded in ungraspable configurations such that no pregrasp pose can be found, e.g., large flat boxes on a table that can only be grasped from the side.


Reaching, Grasping and Re-grasping: Learning Fine Coordinated Motor Skills

no code implementations, 11 Feb 2020: Wenbin Hu, Chuanyu Yang, Kai Yuan, Zhibin Li

The performance of the learned policy is evaluated on three different tasks: grasping a static target, grasping a dynamic target, and re-grasping.


Learning Whole-body Motor Skills for Humanoids

no code implementations, 7 Feb 2020: Chuanyu Yang, Kai Yuan, Wolfgang Merkt, Taku Komura, Sethu Vijayakumar, Zhibin Li

This paper presents a hierarchical framework for deep reinforcement learning that acquires motor skills for a variety of push-recovery and balancing behaviors, i.e., ankle, hip, foot-tilting, and stepping strategies.

Recurrent Deterministic Policy Gradient Method for Bipedal Locomotion on Rough Terrain Challenge

no code implementations, 8 Oct 2017: Doo Re Song, Chuanyu Yang, Christopher McGreavy, Zhibin Li

This paper presents a deep learning framework capable of solving partially observable locomotion tasks, based on our novel interpretation of the Recurrent Deterministic Policy Gradient (RDPG).
