Search Results for author: Zhetao Li

Found 3 papers, 1 paper with code

RoMA: Robust Malware Attribution via Byte-level Adversarial Training with Global Perturbations and Adversarial Consistency Regularization

no code implementations • 11 Feb 2025 • Yuxia Sun, Huihong Chen, Jingcai Guo, Aoxiang Sun, Zhetao Li, Haolin Liu

Extensive experiments show that RoMA significantly outperforms seven competing methods in both adversarial robustness (e.g., achieving over 80% robust accuracy, more than twice that of the next-best method under PGD attacks) and training efficiency (e.g., more than twice as fast as the second-best method in terms of accuracy), while maintaining superior standard accuracy in non-adversarial scenarios.

Adversarial Robustness • Malware Detection
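The RoMA abstract above refers to PGD attacks and adversarial training with a consistency term. For orientation, here is a minimal, hypothetical sketch of generic PGD-based adversarial training with a KL consistency regularizer in PyTorch; it is not the RoMA implementation, and it omits the byte-level/discrete handling that malware inputs would require. All function names and hyperparameters are illustrative placeholders.

```python
# Illustrative sketch only (not RoMA): PGD adversarial training with a
# consistency term between clean and adversarial predictions.
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, eps=0.03, alpha=0.01, steps=10):
    """Craft an L_inf PGD perturbation of x against the current model."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # project back into eps-ball
    return x_adv.detach()

def train_step(model, optimizer, x, y, lam=1.0):
    """One adversarial-training step with a clean/adversarial consistency loss."""
    model.train()
    x_adv = pgd_perturb(model, x, y)
    logits_clean, logits_adv = model(x), model(x_adv)
    # Cross-entropy on adversarial examples plus a KL term pulling the
    # adversarial prediction toward the clean prediction.
    loss = F.cross_entropy(logits_adv, y) + lam * F.kl_div(
        F.log_softmax(logits_adv, dim=1),
        F.softmax(logits_clean, dim=1),
        reduction="batchmean",
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```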

Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning

no code implementations • 21 Apr 2023 • Hangtao Zhang, Zeming Yao, Leo Yu Zhang, Shengshan Hu, Chao Chen, Alan Liew, Zhetao Li

Federated learning (FL) is vulnerable to poisoning attacks, where adversaries corrupt the global aggregation results and cause denial-of-service (DoS).

Federated Learning • Model Poisoning
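The abstract above describes adversaries corrupting the global aggregation result to cause denial-of-service. As a rough illustration of the attack surface (not the paper's attack), the following hypothetical NumPy sketch shows how a single malicious client can dominate an unweighted FedAvg round by flipping and scaling its update; all names and values are made up.

```python
# Illustrative sketch only: one malicious client poisoning unweighted FedAvg.
import numpy as np

def fedavg(updates):
    """Unweighted average of client updates (each a list of numpy arrays)."""
    return [np.mean(np.stack(layer), axis=0) for layer in zip(*updates)]

def malicious_update(honest_update, boost=100.0):
    """Flip and scale an honest update so the aggregate diverges (DoS-style)."""
    return [-boost * layer for layer in honest_update]

# Toy round: 4 honest clients, 1 attacker.
rng = np.random.default_rng(0)
honest = [[rng.normal(size=(3, 3)), rng.normal(size=(3,))] for _ in range(4)]
attacker = malicious_update(honest[0])
global_update = fedavg(honest + [attacker])
print([layer.round(2) for layer in global_update])  # dominated by the attacker
```

Robust aggregation rules (e.g., coordinate-wise median or trimmed mean) are the usual countermeasure to this kind of scaling attack, which is why fine-grained attacks that evade such defenses are a distinct threat from plain DoS.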
