no code implementations • 21 May 2025 • Xiaoyun Zhang, Jingqing Ruan, Xing Ma, Yawen Zhu, Haodong Zhao, Hao Li, Jiansong Chen, Ke Zeng, Xunliang Cai
Large reasoning models (LRMs) achieve remarkable performance via long reasoning chains, but often incur excessive computational overhead due to redundant reasoning, especially on simple tasks.
no code implementations • 13 May 2025 • Haodong Zhao, Peng Peng, Chiyu Chen, Linqing Huang, Gongshen Liu
To address this gap, we propose a realistic federated RS dataset, termed FedRS.
1 code implementation • 3 Mar 2025 • Tianjie Ju, Yi Hua, Hao Fei, Zhenyu Shao, Yubin Zheng, Haodong Zhao, Mong-Li Lee, Wynne Hsu, Zhuosheng Zhang, Gongshen Liu
Multi-Modal Large Language Models (MLLMs) have exhibited remarkable performance on various vision-language tasks such as Visual Question Answering (VQA).
no code implementations • 12 Feb 2025 • Zhaomin Wu, Zhen Qin, Junyi Hou, Haodong Zhao, Qinbin Li, Bingsheng He, Lixin Fan
Based on these observations, we outline key research directions aimed at bridging the gap between current VFL research and real-world applications.
1 code implementation • 16 Oct 2024 • Haodong Zhao, Jinming Hu, Peixuan Li, Fangqi Li, Jinrui Sha, Tianjie Ju, Peixuan Chen, Zhuosheng Zhang, Gongshen Liu
Language models (LMs) have emerged as critical intellectual property (IP) assets that necessitate protection.
1 code implementation • 10 Jul 2024 • Tianjie Ju, Yiting Wang, Xinbei Ma, Pengzhou Cheng, Haodong Zhao, Yulong Wang, Lifeng Liu, Jian Xie, Zhuosheng Zhang, Gongshen Liu
The rapid adoption of large language models (LLMs) in multi-agent systems has highlighted their impressive capabilities in various applications, such as collaborative problem-solving and autonomous negotiation.
1 code implementation • 23 Feb 2024 • Haodong Zhao, Ruifang He, Mengnan Xiao, Jing Xu
First, we leverage parameter-efficient prompt tuning to map the input arguments into the pre-trained representation space, realizing the approximation with only a few trainable parameters.
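The core of parameter-efficient prompt tuning described above is that only a small set of soft-prompt vectors is trained while the pre-trained model stays frozen. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def prepend_soft_prompt(input_embeds, prompt_embeds):
    """Prepend trainable soft-prompt vectors to the (frozen) token
    embeddings; only prompt_embeds would receive gradient updates.

    input_embeds:  (seq_len, hidden) frozen input embeddings
    prompt_embeds: (prompt_len, hidden) trainable soft prompts
    returns:       (prompt_len + seq_len, hidden)
    """
    return np.concatenate([prompt_embeds, input_embeds], axis=0)
```

The frozen model then consumes the concatenated sequence, so the number of tuned parameters is just `prompt_len * hidden`, a tiny fraction of the full model.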
no code implementations • 20 Feb 2024 • Fangqi Li, Haodong Zhao, Wei Du, Shilin Wang
To trace the copyright of deep neural networks, an owner can embed its identity information into its model as a watermark.
1 code implementation • ICCV 2023 • Mingyang Zhang, Xinyi Yu, Haodong Zhao, Linlin Ou
To address the problem of uniform sampling, we propose ShiftNAS, a method that can adjust the sampling probability based on the complexity of subnets.
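The idea of replacing uniform sampling with complexity-aware sampling can be sketched as follows. This is a hedged illustration of the general mechanism, not ShiftNAS's exact probability rule; the softmax-over-complexity weighting and temperature are assumptions:

```python
import math
import random

def shift_sampling_probs(complexities, temperature=1.0):
    """Assign each subnet a sampling probability that grows with its
    complexity score (softmax weighting; illustrative, not the paper's
    exact formula), instead of sampling all subnets uniformly."""
    weights = [math.exp(c / temperature) for c in complexities]
    total = sum(weights)
    return [w / total for w in weights]

def sample_subnet(subnets, complexities):
    """Draw one subnet according to the complexity-adjusted distribution."""
    probs = shift_sampling_probs(complexities)
    return random.choices(subnets, weights=probs, k=1)[0]
```

Under this scheme, harder subnets are visited more often during supernet training than uniform sampling would allow.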
no code implementations • 16 May 2023 • Wei Du, Peixuan Li, Boqun Li, Haodong Zhao, Gongshen Liu
In this paper, we first summarize the requirements that a more threatening backdoor attack against PLMs should satisfy, and then propose a new backdoor attack method called UOR, which breaks the bottleneck of the previous approach by turning manual selection into automatic optimization.
no code implementations • 25 Aug 2022 • Haodong Zhao, Wei Du, Fangqi Li, Peixuan Li, Gongshen Liu
In this paper, we propose "FedPrompt" to study prompt tuning in a model-split aggregation manner using FL, and show that split aggregation greatly reduces the communication cost to only 0.01% of the PLMs' parameters, with little accuracy loss on both IID and non-IID data distributions.