8 Feb 2024 • Xianghe Pang, Shuo Tang, Rui Ye, Yuxin Xiong, Bolun Zhang, Yanfeng Wang, Siheng Chen
Aligning large language models (LLMs) with human values is imperative to mitigate potential adverse effects resulting from their misuse.