no code implementations • 24 Oct 2024 • Hengxiang Zhang, Hongfu Gao, Qiang Hu, Guanhua Chen, Lili Yang, BingYi Jing, Hongxin Wei, Bing Wang, Haifeng Bai, Lei Yang
While previous works have introduced several benchmarks to evaluate the safety risks of LLMs, the community still has a limited understanding of how well current LLMs can recognize illegal and unsafe content in Chinese contexts.
1 code implementation • 27 May 2024 • Hongfu Gao, Feipeng Zhang, Wenyu Jiang, Jun Shu, Feng Zheng, Hongxin Wei
In this work, we show that, on text generation tasks, noisy annotations significantly hurt the performance of in-context learning.
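To make the setting concrete, the sketch below illustrates one way noisy annotations can enter an in-context learning prompt for a text generation task: demonstration outputs are randomly swapped before the prompt is assembled. The dataset, noise model, prompt format, and all function names here are illustrative assumptions, not the paper's actual experimental setup.

```python
# Minimal, hypothetical sketch: injecting label noise into the demonstrations
# of an in-context learning (ICL) prompt. Illustrative only; not the paper's
# actual noise model or prompt template.
import random

# Hypothetical few-shot demonstrations: (input text, reference output).
demonstrations = [
    ("Translate to French: Good morning.", "Bonjour."),
    ("Translate to French: Thank you very much.", "Merci beaucoup."),
    ("Translate to French: See you tomorrow.", "A demain."),
]

def corrupt_annotation(output, noise_rate, pool):
    """With probability `noise_rate`, replace the reference output with a
    randomly chosen output from another demonstration (annotation noise)."""
    if random.random() < noise_rate:
        candidates = [o for o in pool if o != output]
        return random.choice(candidates) if candidates else output
    return output

def build_icl_prompt(demos, query, noise_rate=0.0):
    """Concatenate (possibly noisy) demonstrations and the query into a prompt."""
    outputs = [out for _, out in demos]
    lines = []
    for inp, out in demos:
        noisy_out = corrupt_annotation(out, noise_rate, outputs)
        lines.append(f"Input: {inp}\nOutput: {noisy_out}")
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

if __name__ == "__main__":
    random.seed(0)
    query = "Translate to French: Good night."
    print(build_icl_prompt(demonstrations, query))              # clean demonstrations
    print("---")
    print(build_icl_prompt(demonstrations, query, noise_rate=0.5))  # noisy demonstrations
```

Comparing generations conditioned on the clean versus noisy prompts is one simple way to probe how sensitive in-context learning is to annotation noise in this kind of setup.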