no code implementations • 21 Apr 2024 • Hongyu Zhu, Sichu Liang, Wentao Hu, Fangqi Li, Ju Jia, Shilin Wang
With the rise of Machine Learning as a Service (MLaaS) platforms, safeguarding the intellectual property of deep learning models is becoming paramount.
no code implementations • 20 Feb 2024 • Fangqi Li, Haodong Zhao, Wei Du, Shilin Wang
To trace the copyright of deep neural networks, an owner can embed its identity information into its model as a watermark.
1 code implementation • 18 Jan 2024 • Tongxin Yuan, Zhiwei He, Lingzhong Dong, Yiming Wang, Ruijie Zhao, Tian Xia, Lizhen Xu, Binglin Zhou, Fangqi Li, Zhuosheng Zhang, Rui Wang, Gongshen Liu
We introduce R-Judge, a benchmark crafted to evaluate the proficiency of LLMs in judging and identifying safety risks given agent interaction records.
no code implementations • 25 Aug 2022 • Haodong Zhao, Wei Du, Fangqi Li, Peixuan Li, Gongshen Liu
In this paper, we propose "FedPrompt" to study prompt tuning with split aggregation in federated learning (FL), and show that split aggregation greatly reduces the communication cost (to only 0.01% of the PLM's parameters) with little accuracy loss on both IID and non-IID data distributions.
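The communication saving comes from keeping the pretrained language model frozen on each client and aggregating only the small soft-prompt vector. A minimal sketch of that idea, with illustrative sizes and plain FedAvg (the names and numbers below are assumptions, not taken from the paper):

```python
import numpy as np

# Illustrative sizes (hypothetical, not from the paper): a PLM with
# ~100M parameters vs. a soft prompt with ~10K parameters.
PLM_PARAMS = 100_000_000
PROMPT_PARAMS = 10_000

rng = np.random.default_rng(0)

# Each client holds a frozen copy of the PLM and trains only its small
# prompt vector; only that vector is communicated for aggregation.
client_prompts = [rng.normal(size=PROMPT_PARAMS) for _ in range(5)]

# Server-side split aggregation: plain FedAvg over the prompt vectors alone.
global_prompt = np.mean(client_prompts, axis=0)

# Per-round communication relative to exchanging the full model.
ratio = PROMPT_PARAMS / PLM_PARAMS
print(f"fraction of PLM parameters communicated: {ratio:.2%}")  # prints 0.01%
```

With these sizes, each round exchanges four orders of magnitude fewer parameters than full-model federated averaging, matching the 0.01% figure quoted in the abstract.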
no code implementations • 9 Apr 2022 • Fangqi Li, Shilin Wang
To confront these challenges, we propose a knowledge-free black-box watermarking scheme for image classification neural networks.
1 code implementation • 31 Oct 2021 • Runbo Ni, Xueyan Li, Fangqi Li, Xiaofeng Gao, Guihai Chen
Finding influential users in social networks is a fundamental problem with many useful applications.