1 code implementation • 14 Feb 2024 • Feifan Song, Yuxuan Fan, Xin Zhang, Peiyi Wang, Houfeng Wang
Large Language Models (LLMs) rely on Human Preference Alignment (HPA) to ensure the generation of safe content.
2 code implementations • 22 May 2023 • Ce Zheng, Lei LI, Qingxiu Dong, Yuxuan Fan, Zhiyong Wu, Jingjing Xu, Baobao Chang
Inspired by in-context learning (ICL), a paradigm that conditions on demonstration contexts without parameter updates, we explore whether ICL can edit factual knowledge.