1 code implementation • 26 Jan 2024 • Zifan Wu, Bo Tang, Qian Lin, Chao Yu, Shangqin Mao, Qianlong Xie, Xingxing Wang, Dong Wang
Results on benchmark tasks show that our method not only achieves asymptotic performance comparable to state-of-the-art on-policy methods while using far fewer samples, but also significantly reduces constraint violations during training.
1 code implementation • 4 Jan 2024 • Qian Lin, Chao Yu, Zongkai Liu, Zifan Wu
In this paper, we aim to utilize only offline trajectory data to train a policy for multi-objective RL.
1 code implementation • 1 Jun 2023 • Qian Lin, Bo Tang, Zifan Wu, Chao Yu, Shangqin Mao, Qianlong Xie, Xingxing Wang, Dong Wang
Aiming at promoting the safe real-world deployment of Reinforcement Learning (RL), research on safe RL has made significant progress in recent years.
1 code implementation • 20 Jan 2023 • Zifan Wu, Chao Yu, Chen Chen, Jianye Hao, Hankz Hankui Zhuo
In Model-based Reinforcement Learning (MBRL), model learning is critical, since an inaccurate model can bias policy learning by generating misleading samples.
1 code implementation • NeurIPS 2021 • Zifan Wu, Chao Yu, Deheng Ye, Junge Zhang, Haiyin Piao, Hankz Hankui Zhuo
We present Coordinated Proximal Policy Optimization (CoPPO), an algorithm that extends the original Proximal Policy Optimization (PPO) to the multi-agent setting.
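For context, the clipped surrogate objective of vanilla PPO, which CoPPO extends to the multi-agent setting, can be sketched as follows. This is a minimal illustration of standard PPO only, not of CoPPO's coordination mechanism; the function name and the NumPy formulation are illustrative assumptions.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Per-sample PPO clipped surrogate (to be maximized).

    ratio:     pi_new(a|s) / pi_old(a|s), the importance ratio
    advantage: estimated advantage A(s, a)
    eps:       clip range (0.2 is the value used in the PPO paper)
    """
    unclipped = ratio * advantage
    # Clipping the ratio to [1 - eps, 1 + eps] removes the incentive
    # to move the policy far from the old one in a single update.
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Taking the minimum yields a pessimistic (lower) bound on the
    # unclipped surrogate objective.
    return np.minimum(unclipped, clipped)

# With a positive advantage, pushing the ratio above 1 + eps gains nothing:
obj = ppo_clip_objective(np.array([1.5]), np.array([1.0]))
print(obj)  # capped at (1 + eps) * advantage = 1.2
```

CoPPO's contribution, per the abstract, lies in coordinating such per-agent updates; the single-agent objective above is only the starting point.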