Search Results for author: Shangqun Yu

Found 4 papers, 0 papers with code

Hierarchical Reinforcement Learning of Locomotion Policies in Response to Approaching Objects: A Preliminary Study

no code implementations · 20 Mar 2022 · Shangqun Yu, Sreehari Rammohan, Kaiyu Zheng, George Konidaris

Animals such as rabbits and birds can instantly generate locomotion behavior in reaction to a dynamic, approaching object, such as a person or a rock, despite having possibly never seen the object before and having limited perception of the object's properties.

Hierarchical Reinforcement Learning · reinforcement-learning

Learning Generalizable Behavior via Visual Rewrite Rules

no code implementations · 9 Dec 2021 · Yiheng Xie, Mingxuan Li, Shangqun Yu, Michael Littman

Though deep reinforcement learning agents have achieved unprecedented success in recent years, their learned policies can be brittle, failing to generalize to even slight modifications of their environments or unfamiliar situations.

Bayesian Exploration for Lifelong Reinforcement Learning

no code implementations · 29 Sep 2021 · Haotian Fu, Shangqun Yu, Michael Littman, George Konidaris

A central question in reinforcement learning (RL) is how to leverage prior knowledge to accelerate learning in new tasks.

reinforcement-learning

Value-Based Reinforcement Learning for Continuous Control Robotic Manipulation in Multi-Task Sparse Reward Settings

no code implementations · 28 Jul 2021 · Sreehari Rammohan, Shangqun Yu, Bowen He, Eric Hsiung, Eric Rosen, Stefanie Tellex, George Konidaris

Learning continuous control in high-dimensional sparse reward settings, such as robotic manipulation, is a challenging problem due to the number of samples often required to obtain accurate optimal value and policy estimates.

Continuous Control · Data Augmentation · +3
