no code implementations • 26 Sep 2024 • Yunpeng Gong, Qingyuan Zeng, Dejun Xu, Zhenzhong Wang, Min Jiang
Despite significant recent advances in adversarial attack research, the security challenges of cross-modal scenarios, such as the transferability of adversarial attacks across infrared, thermal, and RGB images, have been overlooked.
no code implementations • 16 Aug 2024 • Qingyuan Zeng, Zhenzhong Wang, Yiu-ming Cheung, Min Jiang
\textit{Attack} uses an evolutionary algorithm to perturb the crucial regions, where the perturbations are semantically related to the target texts of \textit{Ask}, thus achieving targeted attacks without semantic loss.
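The region-restricted evolutionary attack described above can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the (1+λ)-style search loop, the `loss_fn` stand-in for a real semantic/target-text loss, and all parameter names are assumptions for exposition.

```python
import numpy as np

def evolve_region_attack(image, region_mask, loss_fn, pop_size=8,
                         sigma=0.05, epsilon=0.1, generations=50, seed=0):
    """Sketch: evolve a bounded perturbation restricted to a region mask.

    Keeps the perturbation within [-epsilon, epsilon] and only inside the
    "crucial region" given by region_mask; greedily accepts candidates
    that increase loss_fn (which would be a semantic loss in practice).
    """
    rng = np.random.default_rng(seed)
    best = np.zeros_like(image)          # current best perturbation
    best_loss = loss_fn(image)           # loss of the clean image
    for _ in range(generations):
        for _ in range(pop_size):
            # Mutate only inside the masked region.
            cand = best + sigma * rng.standard_normal(image.shape) * region_mask
            cand = np.clip(cand, -epsilon, epsilon)
            l = loss_fn(np.clip(image + cand, 0.0, 1.0))
            if l > best_loss:
                best, best_loss = cand, l
    return np.clip(image + best, 0.0, 1.0), best_loss

# Toy stand-in loss (distance from mid-gray) in place of a real model loss.
rng = np.random.default_rng(1)
img = rng.random((8, 8))
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0                     # hypothetical "crucial region"
loss = lambda x: float(np.abs(x - 0.5).sum())
adv, adv_loss = evolve_region_attack(img, mask, loss)
```

Because mutations are multiplied by `region_mask`, pixels outside the region are provably untouched, which is the property that lets such attacks preserve semantics elsewhere in the image.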
no code implementations • 18 Jul 2024 • Qingyuan Zeng, Yunpeng Gong, Min Jiang
Studying adversarial attacks on artificial intelligence (AI) systems helps uncover model shortcomings, enabling the construction of more robust systems.
no code implementations • 19 Apr 2024 • Zhenzhong Wang, Qingyuan Zeng, WanYu Lin, Min Jiang, Kay Chen Tan
While graph neural networks (GNNs) have become the de facto standard for graph-based node classification, they rest on the strong assumption that sufficient labeled samples are available.