Search Results for author: Haoxin Lin

Found 3 papers, 3 papers with code

Episodic Return Decomposition by Difference of Implicitly Assigned Sub-Trajectory Reward

1 code implementation • 17 Dec 2023 • Haoxin Lin, Hongqiu Wu, Jiaji Zhang, Yihao Sun, Junyin Ye, Yang Yu

Real-world decision-making problems are usually accompanied by delayed rewards, which affect the sample efficiency of Reinforcement Learning, especially in the extremely delayed case where the only feedback is the episodic reward obtained at the end of an episode (see the sketch below).

Decision Making
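The decomposition named in the title can be illustrated with a small numerical sketch. Assuming a learned model that assigns a return estimate to every trajectory prefix (the function name, the prefix_returns argument, and the toy numbers below are illustrative stand-ins, not the paper's implementation), the per-step proxy reward is the difference of consecutive prefix assignments, so the proxy rewards telescope to the model's estimate of the delayed episodic return:

import numpy as np

def decompose_episodic_return(prefix_returns):
    """Turn prefix-level return assignments into per-step proxy rewards.

    prefix_returns: array of shape (T,) holding a learned model's return
    assignment for each trajectory prefix s_{0:t} (hypothetical; stands in
    for the implicitly assigned sub-trajectory rewards).
    """
    prefix_returns = np.asarray(prefix_returns, dtype=float)
    # Proxy reward at step t = assignment(prefix up to t) - assignment(prefix up to t-1).
    # Summing the differences telescopes back to the final assignment,
    # which is trained to match the delayed episodic return.
    return np.diff(prefix_returns, prepend=0.0)

if __name__ == "__main__":
    prefix = [0.2, 0.5, 0.9, 1.4, 2.0]   # toy prefix return assignments
    proxy = decompose_episodic_return(prefix)
    print(proxy, proxy.sum())            # the proxy rewards sum to prefix[-1] = 2.0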

Model-Bellman Inconsistency for Model-based Offline Reinforcement Learning

2 code implementations • PMLR 2023 • Yihao Sun, Jiaji Zhang, Chengxing Jia, Haoxin Lin, Junyin Ye, Yang Yu

MOBILE quantifies uncertainty through the inconsistency of Bellman estimates under an ensemble of learned dynamics models, which can better approximate the true Bellman error, and penalizes the Bellman estimate according to this uncertainty (see the sketch below).

D4RL • Offline RL • +3
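A MOBILE-style penalty can be sketched as follows: compute a Bellman target for the same state-action pair under each member of the dynamics-model ensemble, treat the spread of those targets as the uncertainty, and subtract it from the mean target. The function, the toy critic, and the toy ensemble below are hypothetical assumptions, not the released implementation:

import numpy as np

def penalized_bellman_target(next_states, rewards, value_fn, gamma=0.99, beta=1.0):
    """Uncertainty-penalized Bellman target in the spirit of MOBILE (a sketch).

    next_states: next-state predictions for the same (s, a), one per ensemble
    member of the learned dynamics model (hypothetical inputs).
    rewards: matching list of predicted rewards.
    value_fn: callable estimating the value of a next state under the current
    policy (hypothetical stand-in for critic plus policy).
    """
    targets = np.array([r + gamma * value_fn(s2)
                        for r, s2 in zip(rewards, next_states)])
    # Model-Bellman inconsistency: the spread of the Bellman estimates across
    # the ensemble serves as the uncertainty quantifier.
    uncertainty = targets.std()
    # Penalize the mean Bellman estimate by that uncertainty.
    return targets.mean() - beta * uncertainty

if __name__ == "__main__":
    value_fn = lambda s: float(np.sum(s))                        # toy critic
    next_states = [np.array([1.0, 0.1 * i]) for i in range(5)]   # 5 ensemble predictions
    rewards = [1.0, 1.1, 0.9, 1.0, 1.05]
    print(penalized_bellman_target(next_states, rewards, value_fn))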

Model-based Reinforcement Learning with Multi-step Plan Value Estimation

1 code implementation • 12 Sep 2022 • Haoxin Lin, Yihao Sun, Jiaji Zhang, Yang Yu

The new model-based reinforcement learning algorithm MPPVE (Model-based Planning Policy Learning with Multi-step Plan Value Estimation) makes better use of the learned model and achieves higher sample efficiency than state-of-the-art model-based RL approaches (see the sketch below).

Model-based Reinforcement Learning • reinforcement-learning • +1
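A multi-step plan value estimate can be sketched as a model rollout of a k-step action plan: accumulate the model-predicted rewards along the plan and bootstrap with the critic at the plan's end. The function below and its toy one-dimensional model are illustrative assumptions, not the paper's code:

def k_step_plan_value(state, plan, dynamics_fn, reward_fn, value_fn, gamma=0.99):
    """Estimate the value of a k-step action plan with a learned model (sketch).

    plan: sequence of actions a_0 .. a_{k-1} proposed by the policy.
    dynamics_fn / reward_fn: hypothetical learned model components mapping
    (state, action) to next state / predicted reward.
    value_fn: critic estimating the value of the state reached after the plan.
    """
    total, s = 0.0, state
    for t, a in enumerate(plan):
        total += (gamma ** t) * reward_fn(s, a)   # accumulate model-predicted rewards
        s = dynamics_fn(s, a)                     # roll the plan forward in the model
    # Bootstrap with the critic at the end of the k-step plan.
    total += (gamma ** len(plan)) * value_fn(s)
    return total

if __name__ == "__main__":
    # Toy 1-D model: the state drifts by the action, reward is the new state.
    dynamics_fn = lambda s, a: s + a
    reward_fn = lambda s, a: s + a
    value_fn = lambda s: 10.0 * s
    print(k_step_plan_value(0.0, [0.1, 0.2, 0.3], dynamics_fn, reward_fn, value_fn))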
