Search Results for author: Kaiqi Xiong

Found 8 papers, 0 papers with code

Advancing DDoS Attack Detection: A Synergistic Approach Using Deep Residual Neural Networks and Synthetic Oversampling

no code implementations • 6 Jan 2024 • Ali Alfatemi, Mohamed Rahouti, Ruhul Amin, Sarah ALJamal, Kaiqi Xiong, Yufeng Xin

In this work, we introduce an enhanced approach for DDoS attack detection by leveraging the capabilities of Deep Residual Neural Networks (ResNets) coupled with synthetic oversampling techniques.

Data Augmentation
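The abstract above pairs ResNet-based detection with synthetic oversampling of the minority (attack) class. A minimal SMOTE-style interpolation sketch in NumPy (not the authors' implementation; the function name and parameters are illustrative):

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=3, rng=None):
    """Generate synthetic minority-class samples by interpolating
    between a random sample and one of its k nearest neighbors."""
    rng = np.random.default_rng(rng)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]           # k nearest neighbors, excluding self
        j = rng.choice(nbrs)
        lam = rng.random()                      # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(out)

X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_syn = smote_like_oversample(X_min, n_new=8, rng=0)
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled set stays inside the minority class's convex hull.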

Enhancing ML-Based DoS Attack Detection Through Combinatorial Fusion Analysis

no code implementations • 2 Oct 2023 • Evans Owusu, Mohamed Rahouti, D. Frank Hsu, Kaiqi Xiong, Yufeng Xin

Mitigating Denial-of-Service (DoS) attacks is vital for online service security and availability.
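Combinatorial fusion analysis combines multiple detectors through their rank and score functions. A minimal average-rank fusion sketch (an assumption about the fusion step, not the paper's exact method):

```python
import numpy as np

def fuse_average_rank(scores):
    """scores: (n_detectors, n_samples) array of anomaly scores,
    higher = more suspicious. Convert each detector's scores to ranks
    (rank 0 = most suspicious) and average the ranks across detectors."""
    ranks = np.argsort(np.argsort(-scores, axis=1), axis=1)
    return ranks.mean(axis=0)   # lower fused rank = more suspicious overall

# Two hypothetical DoS detectors scoring three flows
scores = np.array([[0.9, 0.2, 0.5],
                   [0.7, 0.1, 0.8]])
fused = fuse_average_rank(scores)   # flow 1 is least suspicious under both
```

Rank combination is robust to detectors whose raw scores live on different scales, which is one motivation for fusing ranks rather than averaging scores directly.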

Improving Machine Learning Robustness via Adversarial Training

no code implementations • 22 Sep 2023 • Long Dang, Thushari Hapuarachchi, Kaiqi Xiong, Jing Lin

Moreover, in the non-IID data case, the natural accuracy drops from 66.23% to 57.82%, and the robust accuracy decreases by 25% and 23.4% under C&W and Projected Gradient Descent (PGD) attacks, respectively, compared to the IID data case.

Federated Learning
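The PGD attack referenced in the abstract above iterates signed gradient-ascent steps and projects back into an L-infinity ball around the clean input. A toy NumPy sketch of that loop (the gradient function here is a stand-in, not a real model):

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=0.1, alpha=0.02, steps=10):
    """Projected Gradient Descent: repeatedly step in the sign of the
    loss gradient, then clip back into the eps-ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)                        # gradient of the loss w.r.t. input
        x_adv = x_adv + alpha * np.sign(g)        # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # L_inf projection
    return x_adv

# Toy loss L(x) = sum(x): its gradient is all ones, so PGD pushes x up
# until the projection pins it at the eps boundary.
x = np.zeros(4)
x_adv = pgd_attack(x, lambda z: np.ones_like(z))
```

Adversarial training then minimizes the loss on such `x_adv` examples instead of (or alongside) the clean inputs.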

ML Attack Models: Adversarial Attacks and Data Poisoning Attacks

no code implementations • 6 Dec 2021 • Jing Lin, Long Dang, Mohamed Rahouti, Kaiqi Xiong

Many state-of-the-art ML models have outperformed humans in various tasks such as image classification.

Data Poisoning · Image Classification

Mahalanobis distance-based robust approaches against false data injection attacks on dynamic power state estimation

no code implementations • 19 May 2021 • Jing Lin, Kaiqi Xiong

Compared to existing approaches, our proposed approaches have three major differences and significant strengths: (1) they defend against the three FDI attacks on dynamic power state estimation rather than static power state estimation, (2) they give a robust estimator that can accurately extract a subset of attack-free sensors for power state estimation, and (3) they adopt the little-known Mahalanobis distance in the consistency check of power sensor measurements, which is different from the Euclidean distance used in all the existing studies on power state estimation.
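Point (3) above replaces the Euclidean distance with the Mahalanobis distance in the sensor consistency check. A minimal sketch of flagging inconsistent measurements this way (a simplified illustration, not the paper's estimator; the threshold is illustrative):

```python
import numpy as np

def mahalanobis_outliers(Z, threshold=3.0):
    """Flag measurement vectors whose Mahalanobis distance from the
    sample mean exceeds a threshold. Unlike Euclidean distance, this
    accounts for the covariance between sensor channels."""
    mu = Z.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(Z, rowvar=False))
    d = np.array([np.sqrt((z - mu) @ cov_inv @ (z - mu)) for z in Z])
    return d, d > threshold

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 3))   # 200 measurement vectors from 3 sensors
Z[0] = [10.0, 10.0, 10.0]       # one injected (falsified) measurement
d, flags = mahalanobis_outliers(Z)
```

Measurements that fail the check would be excluded, leaving a subset of attack-free sensors for the state estimator.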

Active Learning Under Malicious Mislabeling and Poisoning Attacks

no code implementations • 1 Jan 2021 • Jing Lin, Ryan Luley, Kaiqi Xiong

To check the performance of the proposed method under an adversarial setting, i.e., malicious mislabeling and data poisoning attacks, we perform an extensive evaluation on the reduced CIFAR-10 dataset, which contains only two classes: airplane and frog.

Active Learning · Data Poisoning · +1
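A malicious-mislabeling attack of the kind evaluated above can be simulated by flipping a fraction of the labels in a binary dataset. A hypothetical helper for such an evaluation (not from the paper):

```python
import numpy as np

def flip_labels(y, frac=0.2, rng=None):
    """Simulate malicious mislabeling: flip a random fraction
    of binary labels (0 <-> 1) and return the poisoned labels
    plus the indices that were flipped."""
    rng = np.random.default_rng(rng)
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned, idx

# A two-class setting analogous to the reduced CIFAR-10 (airplane vs. frog)
y = np.zeros(100, dtype=int)
y_poisoned, idx = flip_labels(y, frac=0.2, rng=0)
```

Training on `y_poisoned` while measuring accuracy against the clean `y` quantifies how much the mislabeling degrades the learner.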

An Adversarial Attack Defending System for Securing In-Vehicle Networks

no code implementations • 25 Aug 2020 • Yi Li, Jing Lin, Kaiqi Xiong

In a modern vehicle, there are over seventy Electronic Control Units (ECUs).

Adversarial Attack
