Search Results for author: Sheng Wen

Found 8 papers, 3 papers with code

The "Beatrix'' Resurrections: Robust Backdoor Detection via Gram Matrices

1 code implementation • 23 Sep 2022 • Wanlun Ma, Derui Wang, Ruoxi Sun, Minhui Xue, Sheng Wen, Yang Xiang

However, recent advanced backdoor attacks show that this assumption no longer holds for dynamic backdoors, where the triggers vary from input to input, thereby defeating the existing defenses.

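The detection signal named in the title is the Gram matrix of intermediate activations. A minimal sketch of that idea, assuming a single feature layer, second-order Gram statistics only, and a median-absolute-deviation score (the authors' actual procedure may differ on all of these points):

```python
import numpy as np

def gram_features(activations):
    """Flatten one sample's intermediate-layer activations (channels, H, W)
    into the upper triangle of their channel-wise Gram matrix."""
    a = activations.reshape(activations.shape[0], -1)  # (channels, spatial)
    g = a @ a.T / a.shape[1]                           # channel co-activation statistics
    return g[np.triu_indices(g.shape[0])]

def fit_class_statistics(clean_activations):
    """Median and median absolute deviation (MAD) of Gram features over a
    small set of trusted clean samples from one class."""
    feats = np.stack([gram_features(a) for a in clean_activations])
    med = np.median(feats, axis=0)
    mad = np.median(np.abs(feats - med), axis=0) + 1e-8
    return med, mad

def deviation_score(activations, med, mad):
    """Largest robust deviation from the clean-class statistics; inputs carrying
    a (possibly input-specific) trigger tend to score far higher than clean ones."""
    f = gram_features(activations)
    return np.max(np.abs(f - med) / mad)
```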

StyleFool: Fooling Video Classification Systems via Style Transfer

1 code implementation • 30 Mar 2022 • Yuxin Cao, Xi Xiao, Ruoxi Sun, Derui Wang, Minhui Xue, Sheng Wen

In this paper, we focus on unrestricted perturbations and propose StyleFool, a black-box video adversarial attack via style transfer, to fool video classification systems.

Adversarial Attack • Classification • +3
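The attack builds on classic neural style transfer: frames are pushed toward the Gram-matrix statistics of a style image, producing large but natural-looking ("unrestricted") perturbations before any query-based refinement against the black-box classifier. A minimal sketch of the stylization step only, assuming a single VGG-16 feature map for both losses and omitting the adversarial refinement:

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

def gram(feat):
    # Channel-wise Gram matrix of a (batch, C, H, W) feature map.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

# Assumption: one VGG-16 feature map serves for both content and style losses.
vgg = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def stylize_frame(content, style, steps=200, style_weight=1e5):
    """Optimize one video frame toward the style image's Gram statistics while
    keeping its content features, yielding an unrestricted perturbation."""
    x = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=0.01)
    with torch.no_grad():
        content_feat = vgg(content)
        style_gram = gram(vgg(style))
    for _ in range(steps):
        feat = vgg(x)
        loss = F.mse_loss(feat, content_feat) + style_weight * F.mse_loss(gram(feat), style_gram)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x.detach().clamp(0, 1)
```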

DeFuzz: Deep Learning Guided Directed Fuzzing

no code implementations • 23 Oct 2020 • Xiaogang Zhu, Shigang Liu, Xian Li, Sheng Wen, Jun Zhang, Seyit Camtepe, Yang Xiang

Fuzzing is one of the most effective techniques for identifying potential software vulnerabilities.

Vulnerability Detection
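The idea behind the entry above is to let a deep-learning model predict which code locations are likely vulnerable and then direct the fuzzer's energy toward seeds that reach them. A minimal sketch of such a scheduling heuristic (the seed representation, scoring formula, and helper names are assumptions, not DeFuzz's actual algorithm):

```python
import heapq

def seed_priority(covered_lines, predicted_vuln_lines, exec_time):
    """Score a seed higher when it exercises code locations the model flags as
    likely vulnerable, discounted by its execution cost."""
    hits = len(covered_lines & predicted_vuln_lines)
    return hits / (1.0 + exec_time)

def build_queue(seeds, predicted_vuln_lines):
    """Order seeds so the fuzzer mutates target-reaching inputs first."""
    queue = []
    for seed in seeds:
        score = seed_priority(seed["coverage"], predicted_vuln_lines, seed["exec_time"])
        heapq.heappush(queue, (-score, seed["id"], seed))  # min-heap, so negate
    return queue

def next_seed(queue):
    """Pop the currently most promising seed for mutation."""
    _, _, seed = heapq.heappop(queue)
    return seed

# Example: two seeds, one of which reaches a predicted-vulnerable location.
seeds = [
    {"id": 0, "coverage": {10, 11, 42}, "exec_time": 0.2},
    {"id": 1, "coverage": {10, 11}, "exec_time": 0.1},
]
queue = build_queue(seeds, predicted_vuln_lines={42, 99})
assert next_seed(queue)["id"] == 0
```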

Analysis of Trending Topics and Text-based Channels of Information Delivery in Cybersecurity

no code implementations • 26 Jun 2020 • Tingmin Wu, Wanlun Ma, Sheng Wen, Xin Xia, Cecile Paris, Surya Nepal, Yang Xiang

We further compare the identified 16 security categories across different sources based on their popularity and impact.

Man-in-the-Middle Attacks against Machine Learning Classifiers via Malicious Generative Models

no code implementations • 14 Oct 2019 • Derui Wang, Chaoran Li, Sheng Wen, Surya Nepal, Yang Xiang

First, such attacks must query the models for outputs multiple times before actually launching the attack, which is difficult for a MitM adversary in practice.

BIG-bench Machine Learning
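The excerpt above contrasts the approach with query-heavy attacks: a perturbation generator trained offline lets the man-in-the-middle rewrite each intercepted input in a single forward pass at attack time. A toy sketch of such a generator (the architecture, epsilon bound, and training objective are assumptions, not the paper's model):

```python
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Maps an intercepted image to a bounded adversarial version in one
    forward pass, so no victim-model queries are needed while attacking."""

    def __init__(self, channels=3, eps=0.03):
        super().__init__()
        self.eps = eps  # maximum per-pixel perturbation
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        # Residual perturbation clipped to an epsilon ball around the input.
        return torch.clamp(x + self.eps * self.net(x), 0.0, 1.0)

# Offline training (not shown) would maximize the victim classifier's loss on
# generator outputs; at attack time the MitM simply transforms intercepted traffic.
gen = PerturbationGenerator().eval()
with torch.no_grad():
    adversarial = gen(torch.rand(1, 3, 32, 32))
```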

Android HIV: A Study of Repackaging Malware for Evading Machine-Learning Detection

no code implementations • 10 Aug 2018 • Xiao Chen, Chaoran Li, Derui Wang, Sheng Wen, Jun Zhang, Surya Nepal, Yang Xiang, Kui Ren

In contrast to existing works, the adversarial examples crafted by our method can also deceive recent machine-learning-based detectors that rely on semantic features such as control-flow graphs.

Cryptography and Security

Defending against Adversarial Attack towards Deep Neural Networks via Collaborative Multi-task Training

no code implementations • 14 Mar 2018 • Derek Wang, Chaoran Li, Sheng Wen, Surya Nepal, Yang Xiang

For example, proactive defense methods are ineffective against grey-box or white-box attacks, while reactive defense methods are challenged by low-distortion or transferable adversarial examples.

Adversarial Attack
