Search Results for author: Donghai Hong

Found 2 papers, 0 papers with code

PKU-SafeRLHF: Towards Multi-Level Safety Alignment for LLMs with Human Preference

no code implementations • 20 Jun 2024 • Jiaming Ji, Donghai Hong, Borong Zhang, Boyuan Chen, Josef Dai, Boren Zheng, Tianyi Qiu, Boxun Li, Yaodong Yang

In this work, we introduce the PKU-SafeRLHF dataset, designed to promote research on safety alignment in large language models (LLMs).

Question Answering · Safety Alignment

Aligner: Efficient Alignment by Learning to Correct

no code implementations • 4 Feb 2024 • Jiaming Ji, Boyuan Chen, Hantao Lou, Donghai Hong, Borong Zhang, Xuehai Pan, Juntao Dai, Tianyi Qiu, Yaodong Yang

However, the tension between the complexity of current alignment methods and the need for rapid iteration in deployment scenarios necessitates the development of a model-agnostic alignment approach that can operate under these constraints.

Hallucination
