Search Results for author: Yuanzhao Zhai

Found 6 papers, 0 papers with code

COPR: Continual Human Preference Learning via Optimal Policy Regularization

no code implementations22 Feb 2024 Han Zhang, Lin Gui, Yu Lei, Yuanzhao Zhai, Yehong Zhang, Yulan He, Hui Wang, Yue Yu, Kam-Fai Wong, Bin Liang, Ruifeng Xu

Reinforcement Learning from Human Feedback (RLHF) is commonly utilized to improve the alignment of Large Language Models (LLMs) with human preferences.

Continual Learning

Uncertainty-Penalized Reinforcement Learning from Human Feedback with Diverse Reward LoRA Ensembles

no code implementations30 Dec 2023 Yuanzhao Zhai, Han Zhang, Yu Lei, Yue Yu, Kele Xu, Dawei Feng, Bo Ding, Huaimin Wang

Reinforcement learning from human feedback (RLHF) has emerged as a promising paradigm for aligning large language models (LLMs).

Uncertainty Quantification

COPR: Continual Learning Human Preference through Optimal Policy Regularization

no code implementations24 Oct 2023 Han Zhang, Lin Gui, Yuanzhao Zhai, Hui Wang, Yu Lei, Ruifeng Xu

Reinforcement Learning from Human Feedback (RLHF) is commonly employed to improve pre-trained Language Models (LMs), enhancing their ability to conform to human preferences.

Continual Learning reinforcement-learning

Self-Supervised Exploration via Temporal Inconsistency in Reinforcement Learning

no code implementations24 Aug 2022 Zijian Gao, Kele Xu, Yuanzhao Zhai, Dawei Feng, Bo Ding, XinJun Mao, Huaimin Wang

Our method involves training a self-supervised prediction model, saving snapshots of the model parameters, and using nuclear norm to evaluate the temporal inconsistency between the predictions of different snapshots as intrinsic rewards.
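The snippet below is a minimal sketch of that inconsistency bonus, not the paper's implementation: the helper name `intrinsic_reward`, the toy prediction matrices, and the drift magnitude are all assumptions. It only illustrates the core idea of scoring the difference between two snapshots' predictions with the nuclear norm.

```python
import numpy as np

def intrinsic_reward(pred_old, pred_new):
    """Temporal-inconsistency bonus (hypothetical helper): the nuclear
    norm (sum of singular values) of the difference between the
    predictions of two parameter snapshots on the same batch."""
    return float(np.linalg.norm(pred_old - pred_new, ord="nuc"))

# Toy predictions from two snapshots of a self-supervised prediction model
rng = np.random.default_rng(0)
snap_old = rng.normal(size=(8, 4))                    # earlier snapshot
snap_new = snap_old + 0.1 * rng.normal(size=(8, 4))   # later, drifted snapshot

bonus = intrinsic_reward(snap_old, snap_new)
# identical snapshots are perfectly consistent, so the bonus vanishes
assert intrinsic_reward(snap_old, snap_old) == 0.0
assert bonus > 0.0
```

States where the model's predictions change most across snapshots receive the largest bonus, steering exploration toward regions the model has not yet learned consistently.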

reinforcement-learning Reinforcement Learning (RL)

Dynamic Memory-based Curiosity: A Bootstrap Approach for Exploration

no code implementations24 Aug 2022 Zijian Gao, Yiying Li, Kele Xu, Yuanzhao Zhai, Dawei Feng, Bo Ding, XinJun Mao, Huaimin Wang

Curiosity arises when the memorized information cannot handle the current state; the information gap between the dual learners is formulated as the intrinsic reward for agents, and the state information is then consolidated into the dynamic memory.
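A minimal sketch of that loop follows, under stated assumptions rather than the authors' method: the squared-error gap as the reward, the EMA-style consolidation, and names like `information_gap` and `consolidate` are all hypothetical choices used only to illustrate the mechanism.

```python
import numpy as np

TAU = 0.99  # assumed consolidation rate for the slow (memory) learner

def information_gap(fast_pred, slow_pred):
    """Intrinsic reward (hypothetical formulation): mean squared gap
    between the dual learners' predictions for the current state."""
    return float(np.mean((fast_pred - slow_pred) ** 2))

def consolidate(slow_params, fast_params, tau=TAU):
    """Fold the fast learner's knowledge into the dynamic memory via an
    exponential moving average of parameters."""
    return tau * slow_params + (1.0 - tau) * fast_params

rng = np.random.default_rng(1)
fast = rng.normal(size=16)   # fast learner's representation of the state
slow = np.zeros(16)          # dynamic memory starts empty

reward = information_gap(fast, slow)   # memory can't explain the state yet
slow = consolidate(slow, fast)         # state information enters the memory
assert reward > 0.0
assert information_gap(fast, slow) < reward  # curiosity shrinks afterwards
```

The key property is that the reward decays for states the memory has already absorbed, so the agent keeps moving toward genuinely novel states.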

Reinforcement Learning (RL)
