Search Results for author: Francisco Roldan Sanchez

Found 6 papers, 5 papers with code

Dataset Clustering for Improved Offline Policy Learning

1 code implementation • 14 Feb 2024 • Qiang Wang, Yixin Deng, Francisco Roldan Sanchez, Keru Wang, Kevin McGuinness, Noel O'Connor, Stephen J. Redmond

Offline policy learning aims to discover decision-making policies from previously collected datasets without additional online interactions with the environment.

Clustering • Continuous Control • +2
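As a rough illustration of the idea in the abstract (clustering a pre-collected dataset, then learning a policy offline on part of it), the sketch below clusters synthetic trajectories with k-means and behaviour-clones a linear policy on the largest cluster. The feature choice, clustering method, and policy class are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch only: cluster a pre-collected dataset of trajectories,
# then behaviour-clone a policy on one cluster. The summary feature (mean
# observation per trajectory), k-means, and the linear policy are assumptions
# for illustration, not the method from the paper.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic offline dataset: 100 trajectories of (observation, action) pairs.
obs_dim, act_dim, horizon = 8, 2, 50
trajectories = [
    (rng.normal(size=(horizon, obs_dim)), rng.normal(size=(horizon, act_dim)))
    for _ in range(100)
]

# Cluster trajectories by their mean observation (a simple summary feature).
features = np.stack([obs.mean(axis=0) for obs, _ in trajectories])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

# Behaviour cloning on the largest cluster: least-squares fit action = obs @ W.
largest = np.bincount(labels).argmax()
obs = np.concatenate([o for (o, _), l in zip(trajectories, labels) if l == largest])
act = np.concatenate([a for (_, a), l in zip(trajectories, labels) if l == largest])
W, *_ = np.linalg.lstsq(obs, act, rcond=None)  # linear policy weights

def policy(observation):
    """Greedy action from the cloned linear policy (illustrative only)."""
    return observation @ W
```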

Learning and reusing primitive behaviours to improve Hindsight Experience Replay sample efficiency

1 code implementation • 3 Oct 2023 • Francisco Roldan Sanchez, Qiang Wang, David Cordova Bulens, Kevin McGuinness, Stephen Redmond, Noel O'Connor

Hindsight Experience Replay (HER) is a technique used in reinforcement learning (RL) that has proven to be very efficient for training off-policy RL-based agents to solve goal-based robotic manipulation tasks using sparse rewards.

Reinforcement Learning (RL)
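For context on how HER generates learning signal under sparse rewards, the sketch below relabels an episode's transitions with the goal actually achieved at the end of that episode, so failed rollouts still contain successful examples. The transition format, the "final" relabelling strategy, and the distance-based sparse reward are assumptions for illustration, not the implementation used in the paper.

```python
# Minimal sketch of Hindsight Experience Replay (HER) goal relabelling,
# assuming transitions stored as dicts with keys obs, action, goal,
# achieved_goal, and the simple "final" strategy (relabel with the goal
# achieved at episode end). Illustrative only.
import numpy as np

def sparse_reward(achieved_goal, goal, tol=0.05):
    """Sparse reward: 0 if the goal is reached within tolerance, else -1."""
    return 0.0 if np.linalg.norm(achieved_goal - goal) < tol else -1.0

def her_relabel(episode):
    """Return the original transitions plus copies relabelled with the final
    achieved goal, turning an unsuccessful episode into useful training data."""
    final_achieved = episode[-1]["achieved_goal"]
    relabelled = []
    for t in episode:
        new_t = dict(t)
        new_t["goal"] = final_achieved
        new_t["reward"] = sparse_reward(t["achieved_goal"], final_achieved)
        relabelled.append(new_t)
    return episode + relabelled
```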

Dexterous Robotic Manipulation using Deep Reinforcement Learning and Knowledge Transfer for Complex Sparse Reward-based Tasks

1 code implementation • 19 May 2022 • Qiang Wang, Francisco Roldan Sanchez, Robert McCarthy, David Cordova Bulens, Kevin McGuinness, Noel O'Connor, Manuel Wüthrich, Felix Widmaier, Stefan Bauer, Stephen J. Redmond

Here we extend this method by modifying the Phase 1 task of the RRC to require the robot to maintain the cube in a particular orientation while the cube is moved along the required positional trajectory.

Transfer Learning
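As a rough illustration of the modified objective described above (tracking the positional trajectory while keeping the cube near a target orientation), the sketch below evaluates a combined position-and-orientation success condition. The tolerances and the quaternion angle metric are hypothetical choices for illustration, not those used in the challenge or the paper.

```python
# Hypothetical success check for a task that requires both positional tracking
# and orientation maintenance. Thresholds and the quaternion-based angle metric
# are illustrative assumptions.
import numpy as np

def orientation_error(q_cube, q_target):
    """Angle (radians) between two unit quaternions given as [x, y, z, w]."""
    dot = np.clip(abs(np.dot(q_cube, q_target)), -1.0, 1.0)
    return 2.0 * np.arccos(dot)

def goal_reached(cube_pos, target_pos, q_cube, q_target,
                 pos_tol=0.02, ori_tol=np.deg2rad(15)):
    """True only if both the positional and orientation errors are within tolerance."""
    pos_ok = np.linalg.norm(np.asarray(cube_pos) - np.asarray(target_pos)) < pos_tol
    ori_ok = orientation_error(np.asarray(q_cube), np.asarray(q_target)) < ori_tol
    return pos_ok and ori_ok
```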

Solving the Real Robot Challenge using Deep Reinforcement Learning

2 code implementations • 30 Sep 2021 • Robert McCarthy, Francisco Roldan Sanchez, Qiang Wang, David Cordova Bulens, Kevin McGuinness, Noel O'Connor, Stephen J. Redmond

This paper details our winning submission to Phase 1 of the 2021 Real Robot Challenge, in which a three-fingered robot must carry a cube along specified goal trajectories.

Reinforcement Learning (RL) • +1
