1 code implementation • 18 Sep 2024 • Yi Chen, Xiaoyang Dong, Jian Guo, Yantian Shen, Anyu Wang, XiaoYun Wang
However, this goal is not achieved when neural networks operate under a hard-label setting where the raw output is inaccessible.
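The hard-label restriction can be illustrated with a minimal sketch (not taken from the paper's code; the model and query names below are hypothetical): in the usual setting an attacker can read the model's raw scores, whereas in the hard-label setting only the final predicted class is observable.

# Minimal sketch (hypothetical names, not the paper's code): contrast the raw-output
# setting with the hard-label setting, where only the predicted class leaks.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(16, 2)       # stand-in for a neural distinguisher
queries = torch.randn(4, 16)         # hypothetical batch of queries

logits = model(queries)              # raw output: visible only in the soft-label setting
probs = F.softmax(logits, dim=-1)    # class probabilities, likewise hidden under hard labels
hard_labels = logits.argmax(dim=-1)  # hard-label setting: these 0/1 decisions are all that is observed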
1 code implementation • 13 Jun 2024 • Delong Ran, JinYuan Liu, Yichen Gong, Jingyi Zheng, Xinlei He, Tianshuo Cong, Anyu Wang
Jailbreak attacks aim to induce Large Language Models (LLMs) to generate harmful responses for forbidden instructions, presenting severe misuse threats to LLMs.
1 code implementation • 8 Apr 2024 • Tianshuo Cong, Delong Ran, Zesen Liu, Xinlei He, JinYuan Liu, Yichen Gong, Qi Li, Anyu Wang, XiaoYun Wang
Model merging is a promising lightweight model empowerment technique that does not rely on expensive computing devices (e.g., GPUs) or require the collection of specific training data.
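As a rough illustration of the idea (an assumption on our part: plain parameter averaging, one common merging recipe, is shown here; the paper's own merging and evaluation setup may differ), merging two fine-tuned models with identical architectures can be as simple as interpolating their weights:

# Sketch of model merging by parameter averaging (assumed recipe, hypothetical names).
import copy
import torch

def merge_by_averaging(model_a, model_b, alpha=0.5):
    # Convex combination of the parameters of two models sharing one architecture.
    merged = copy.deepcopy(model_a)
    state_a, state_b = model_a.state_dict(), model_b.state_dict()
    merged.load_state_dict({
        name: alpha * state_a[name] + (1.0 - alpha) * state_b[name]
        for name in state_a
    })
    return merged

# Hypothetical usage with two fine-tuned copies of the same base network.
model_a = torch.nn.Linear(8, 8)
model_b = torch.nn.Linear(8, 8)
merged = merge_by_averaging(model_a, model_b, alpha=0.5)

Note that the merge is a pure parameter-space operation: it needs neither a GPU nor any training data, which is what makes the technique lightweight.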
1 code implementation • 9 Nov 2023 • Yichen Gong, Delong Ran, JinYuan Liu, Conglei Wang, Tianshuo Cong, Anyu Wang, Sisi Duan, XiaoYun Wang
Ensuring the safety of artificial intelligence-generated content (AIGC) is a longstanding topic in the artificial intelligence (AI) community, and the safety concerns associated with Large Language Models (LLMs) have been widely investigated.
1 code implementation • 21 Jan 2017 • Anyu Wang, Zhifang Zhang, Dongdai Lin
For binary $[n, k, d]$ linear locally repairable codes (LRCs), two new upper bounds on $k$ are derived.
Information Theory
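For background only (this is the classical bound, not one of the paper's results): an $[n, k, d]$ LRC in which every code symbol has locality $r$ obeys the Singleton-type bound $d \le n - k - \lceil k/r \rceil + 2$; the paper's two new upper bounds on $k$ for binary codes are separate results not reproduced here.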