1 code implementation • 23 Feb 2024 • Haodong Zhao, Ruifang He, Mengnan Xiao, Jing Xu
First, we leverage parameter-efficient prompt tuning to align the input arguments with the pre-trained space and realize the approximation with few parameters.
no code implementations • 20 Feb 2024 • Fangqi Li, Haodong Zhao, Wei Du, Shilin Wang
To trace the copyright of deep neural networks, an owner can embed its identity information into its model as a watermark.
1 code implementation • ICCV 2023 • Mingyang Zhang, Xinyi Yu, Haodong Zhao, Linlin Ou
To address the problem of uniform sampling, we propose ShiftNAS, a method that can adjust the sampling probability based on the complexity of subnets.
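The entry above describes sampling subnets with a probability tied to their complexity rather than uniformly. A minimal sketch of that idea (function and variable names are assumptions, not the ShiftNAS code) uses each subnet's recent training loss as a complexity proxy and converts losses into sampling probabilities via a softmax:

```python
import math
import random

def complexity_aware_probs(losses, temperature=1.0):
    """Softmax over per-subnet losses -> sampling probabilities.

    Subnets with larger loss (harder subnets) receive a higher
    probability, unlike uniform sampling which treats all equally.
    """
    exps = [math.exp(l / temperature) for l in losses]
    total = sum(exps)
    return [e / total for e in exps]

subnet_losses = [0.2, 0.8, 1.5]   # larger loss = more complex subnet
probs = complexity_aware_probs(subnet_losses)

# Pick the next subnet to train, biased toward the complex ones.
next_subnet = random.choices(range(len(probs)), weights=probs)[0]
```

The `temperature` knob interpolates between uniform sampling (large values) and greedily favoring the hardest subnet (small values).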
no code implementations • 16 May 2023 • Wei Du, Peixuan Li, Boqun Li, Haodong Zhao, Gongshen Liu
In this paper, we first summarize the requirements that a more threatening backdoor attack against PLMs should satisfy, and then propose a new backdoor attack method called UOR, which breaks the bottleneck of previous approaches by turning manual selection into automatic optimization.
no code implementations • 25 Aug 2022 • Haodong Zhao, Wei Du, Fangqi Li, Peixuan Li, Gongshen Liu
In this paper, we propose "FedPrompt" to study prompt tuning in a model split aggregation way using FL, and prove that split aggregation greatly reduces the communication cost, to only 0.01% of the PLMs' parameters, with little decrease in accuracy on both IID and Non-IID data distributions.
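The split-aggregation idea above — each client freezes the PLM, trains only a small soft prompt, and the server averages just those prompt parameters — can be sketched as follows (a simplified illustration with assumed names, not the FedPrompt implementation; prompts are plain lists standing in for prompt-embedding tensors):

```python
def aggregate_prompts(client_prompts):
    """FedAvg over prompt parameters only, with uniform client weights.

    The backbone PLM never leaves the clients; only these small prompt
    vectors are communicated, which is why the traffic is a tiny
    fraction of the full model's parameters.
    """
    n = len(client_prompts)
    length = len(client_prompts[0])
    return [sum(p[i] for p in client_prompts) / n for i in range(length)]

# Three clients, each holding a locally tuned 2-parameter prompt.
clients = [[0.1, 0.2], [0.3, 0.4], [0.2, 0.0]]
global_prompt = aggregate_prompts(clients)  # element-wise mean
```

In a real run the averaged prompt would be broadcast back to clients for the next round, while the frozen PLM weights stay local throughout.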