Search Results for author: Siyue Wang

Found 16 papers, 5 papers with code

AutoAttacker: A Large Language Model Guided System to Implement Automatic Cyber-attacks

no code implementations • 2 Mar 2024 • Jiacen Xu, Jack W. Stokes, Geoff McDonald, Xuesong Bai, David Marshall, Siyue Wang, Adith Swaminathan, Zhou Li

Large language models (LLMs) have demonstrated impressive results on natural language tasks, and security researchers are beginning to employ them in both offensive and defensive systems.

Computer Security • Language Modelling +1

Detection and Recovery Against Deep Neural Network Fault Injection Attacks Based on Contrastive Learning

no code implementations • 30 Jan 2024 • Chenan Wang, Pu Zhao, Siyue Wang, Xue Lin

Deep Neural Network (DNN) models, when deployed as inference engines on executing devices, are susceptible to Fault Injection Attacks (FIAs) that manipulate model parameters to disrupt inference execution, with disastrous effects on performance.

Contrastive Learning • Self-Supervised Learning
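
A fault injection attack is commonly modeled as bit flips in the stored weights. As a minimal illustration of why a single fault can be disastrous (a generic FIA model, not this paper's detection method), flipping one exponent bit of a float32 weight changes it by many orders of magnitude:

```python
import struct

import numpy as np

def flip_bit(weight: float, bit: int) -> np.float32:
    """Flip one bit of a float32 weight -- a standard fault-injection model."""
    (as_int,) = struct.unpack("I", struct.pack("f", weight))
    as_int ^= 1 << bit
    (faulty,) = struct.unpack("f", struct.pack("I", as_int))
    return np.float32(faulty)

w = np.float32(0.125)
# Flipping the most significant exponent bit turns 0.125 into about 4e+37,
# which is why a handful of well-placed faults can wreck inference.
print(w, "->", flip_bit(w, 30))
```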

EMShepherd: Detecting Adversarial Samples via Side-channel Leakage

no code implementations • 27 Mar 2023 • Ruyi Ding, Cheng Gongye, Siyue Wang, Aidong Ding, Yunsi Fei

Inspired by the fact that the electromagnetic (EM) emanations of a model inference depend on both operations and data, and may contain footprints of different input classes, we propose EMShepherd, a framework that captures EM traces of model execution, processes the traces, and exploits them for adversarial detection.
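
The paper trains per-class classifiers on spectrograms of captured EM traces. As a much simpler stand-in conveying the flavor of the pipeline (all data synthetic, thresholds hypothetical), one can extract spectral features from a trace and test them against a benign profile:

```python
import numpy as np

def spectral_features(trace: np.ndarray, n_bins: int = 32) -> np.ndarray:
    """Summarize an EM trace by a coarsely binned magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(trace))
    return np.array([b.mean() for b in np.array_split(spectrum, n_bins)])

# Synthetic stand-in for traces captured while the model classifies benign
# inputs; real traces would come from an EM probe and an oscilloscope.
rng = np.random.default_rng(0)
benign = [spectral_features(rng.normal(size=4096)) for _ in range(100)]
mu, sigma = np.mean(benign, axis=0), np.std(benign, axis=0) + 1e-8

def looks_adversarial(trace: np.ndarray, z_thresh: float = 4.0) -> bool:
    """Flag a trace whose spectrum deviates strongly from the benign profile."""
    z = np.abs((spectral_features(trace) - mu) / sigma)
    return bool(z.max() > z_thresh)
```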

SANA: Cross-Species Prediction of Gene Ontology (GO) Annotations via Topological Network Alignment

1 code implementation • 26 Apr 2022 • Siyue Wang, Giles R. S. Atkinson, Wayne B. Hayes

We argue that this failure of topology alone is due to the sparsity and incompleteness of PPI network data in almost all species, which gives the network topology a small signal-to-noise ratio that is effectively swamped when sequence information is added to the mix.

Semantic Similarity • Semantic Textual Similarity

MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge

1 code implementation • NeurIPS 2021 • Geng Yuan, Xiaolong Ma, Wei Niu, Zhengang Li, Zhenglun Kong, Ning Liu, Yifan Gong, Zheng Zhan, Chaoyang He, Qing Jin, Siyue Wang, Minghai Qin, Bin Ren, Yanzhi Wang, Sijia Liu, Xue Lin

Systematic evaluations of accuracy, training speed, and memory footprint are conducted, and the proposed MEST framework consistently outperforms representative SOTA works.
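
MEST keeps the network sparse throughout training so that weights and gradients fit an edge device's memory budget (the full framework also includes dynamic prune-and-regrow and data-efficiency techniques omitted here). A minimal PyTorch sketch of the always-sparse idea with a fixed per-layer density, hyperparameters illustrative:

```python
import torch

def apply_sparsity(model: torch.nn.Module, density: float = 0.1) -> dict:
    """Keep only the largest-magnitude weights per layer; return the masks."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() < 2:
            continue  # leave biases and norm parameters dense
        k = max(1, int(p.numel() * density))
        threshold = p.abs().flatten().kthvalue(p.numel() - k + 1).values
        masks[name] = (p.abs() >= threshold).float()
        p.data.mul_(masks[name])
    return masks

def masked_step(model, masks, loss, opt):
    """One sparse-training step: gradients of pruned weights are zeroed,
    so only the surviving weights are stored and updated."""
    opt.zero_grad()
    loss.backward()
    for name, p in model.named_parameters():
        if name in masks and p.grad is not None:
            p.grad.mul_(masks[name])
    opt.step()
```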

High-Robustness, Low-Transferability Fingerprinting of Neural Networks

no code implementations • 14 May 2021 • Siyue Wang, Xiao Wang, Pin-Yu Chen, Pu Zhao, Xue Lin

This paper proposes Characteristic Examples for effectively fingerprinting deep neural networks, featuring high robustness to the base model against model pruning as well as low transferability to unassociated models.

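The idea can be sketched as optimizing synthetic inputs that the base model labels with high confidence; pruned derivatives of the base model should agree on them, while unrelated models usually will not. The PyTorch sketch below is illustrative (the input shape, step counts, and match metric are assumptions, not the paper's exact procedure):

```python
import torch
import torch.nn.functional as F

def characteristic_example(model, target_class: int,
                           steps: int = 200, lr: float = 0.05):
    """Optimize an input that the base model labels with high confidence."""
    model.eval()
    x = torch.randn(1, 3, 32, 32, requires_grad=True)  # assumed input shape
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x), torch.tensor([target_class])).backward()
        opt.step()
    return x.detach()

@torch.no_grad()
def fingerprint_match(suspect_model, fingerprints, labels) -> float:
    """Fraction of fingerprints on which a suspect model agrees; derived
    (e.g., pruned) models should score high, unrelated models low."""
    preds = torch.cat([suspect_model(x).argmax(dim=1) for x in fingerprints])
    return (preds == torch.tensor(labels)).float().mean().item()
```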

AdvMS: A Multi-source Multi-cost Defense Against Adversarial Attacks

no code implementations • 19 Feb 2020 • Xiao Wang, Siyue Wang, Pin-Yu Chen, Xue Lin, Peter Chin

Designing effective defenses against adversarial attacks is a crucial topic, as deep neural networks have proliferated rapidly in many security-critical domains such as malware detection and self-driving cars.

Malware Detection • Self-Driving Cars

Towards Query-Efficient Black-Box Adversary with Zeroth-Order Natural Gradient Descent

1 code implementation • 18 Feb 2020 • Pu Zhao, Pin-Yu Chen, Siyue Wang, Xue Lin

Despite the great achievements of modern deep neural networks (DNNs), the vulnerability of state-of-the-art DNNs raises security concerns in many application domains requiring high reliability.

Adversarial Attack • Image Classification
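
Black-box attacks of this kind can only query the model, so gradients must be estimated from function values. A minimal NumPy sketch of the classic two-point random-direction estimator (the paper additionally preconditions the step with a natural-gradient/Fisher term to cut the query count):

```python
import numpy as np

def zo_gradient(loss_fn, x, mu=0.01, n_queries=20):
    """Two-point random-direction gradient estimate from queries alone."""
    grad = np.zeros_like(x)
    for _ in range(n_queries):
        u = np.random.randn(*x.shape)
        grad += (loss_fn(x + mu * u) - loss_fn(x - mu * u)) / (2 * mu) * u
    return grad / n_queries

# Toy usage on a quadratic "black box"; in an attack, loss_fn would wrap
# the victim model's output for a candidate adversarial perturbation.
loss = lambda x: float(np.sum(x ** 2))
x = np.ones(10)
for _ in range(100):
    x -= 0.05 * zo_gradient(loss, x)
print(round(loss(x), 4))  # converges toward 0
```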

Block Switching: A Stochastic Approach for Deep Learning Security

no code implementations • 18 Feb 2020 • Xiao Wang, Siyue Wang, Pin-Yu Chen, Xue Lin, Peter Chin

Recent studies of adversarial attacks have revealed the vulnerability of modern deep learning models.

Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses

1 code implementation • 20 Aug 2019 • Xiao Wang, Siyue Wang, Pin-Yu Chen, Yanzhi Wang, Brian Kulis, Xue Lin, Peter Chin

However, one critical drawback of current defenses is that the robustness enhancement comes at the cost of noticeable performance degradation on legitimate data, e.g., a large drop in test accuracy.

Adversarial Robustness
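
Hierarchical random switching composes blocks that each hold several independently trained channels and pick one at random per forward pass, so an attacker cannot rely on a single fixed computational path. A minimal PyTorch sketch of one switching block (sizes illustrative, training loop omitted):

```python
import random

import torch.nn as nn

class SwitchingBlock(nn.Module):
    """Hold several independently trained candidate channels and pick one
    uniformly at random per forward pass, so an attacker cannot rely on a
    single fixed computational path (and its gradients)."""

    def __init__(self, make_channel, n_channels: int = 4):
        super().__init__()
        self.channels = nn.ModuleList(make_channel() for _ in range(n_channels))

    def forward(self, x):
        return random.choice(self.channels)(x)

# Example: one switched conv stage in a small classifier (sizes illustrative).
make_conv = lambda: nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
model = nn.Sequential(SwitchingBlock(make_conv), nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(), nn.Linear(16, 10))
```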

Fault Sneaking Attack: a Stealthy Framework for Misleading Deep Neural Networks

no code implementations • 28 May 2019 • Pu Zhao, Siyue Wang, Cheng Gongye, Yanzhi Wang, Yunsi Fei, Xue Lin

Despite the great achievements of deep neural networks (DNNs), the vulnerability of state-of-the-art DNNs raises security concerns in many application domains requiring high reliability. We propose the fault sneaking attack on DNNs, where the adversary aims to misclassify certain input images into any target labels by modifying the DNN parameters.

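The attack can be summarized as an optimization over parameter changes: push designated inputs to attacker-chosen labels while constraining both the predictions on other data and the magnitude of the change. The paper formulates this with ADMM; the sketch below substitutes a plain penalized gradient loop for illustration:

```python
import torch
import torch.nn.functional as F

def fault_sneak(model, x_target, y_target, x_keep, y_keep,
                steps: int = 300, lr: float = 1e-3, lam: float = 1.0):
    """Nudge parameters so x_target maps to the attacker's labels y_target
    while predictions on x_keep stay intact; an L2 penalty keeps the overall
    parameter change small and hence stealthy."""
    originals = [p.detach().clone() for p in model.parameters()]
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x_target), y_target)     # attack objective
        loss = loss + F.cross_entropy(model(x_keep), y_keep)  # preserve others
        loss = loss + lam * sum(((p - o) ** 2).sum()          # stay small
                                for p, o in zip(model.parameters(), originals))
        loss.backward()
        opt.step()
```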

E-RNN: Design Optimization for Efficient Recurrent Neural Networks in FPGAs

no code implementations • 12 Dec 2018 • Zhe Li, Caiwen Ding, Siyue Wang, Wujie Wen, Youwei Zhuo, Chang Liu, Qinru Qiu, Wenyao Xu, Xue Lin, Xuehai Qian, Yanzhi Wang

Real-time, efficient, and accurate hardware RNN implementations are challenging because of their high sensitivity to accumulated imprecision and the requirement for special activation function implementations.

Automatic Speech Recognition (ASR) +3
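
Part of the "special activation function" difficulty is that exact sigmoid and tanh require exponentials, which are costly in FPGA logic. A common hardware workaround, shown here for illustration and not necessarily E-RNN's exact scheme, is a piecewise-linear approximation built from a multiply, an add, and a clamp:

```python
import numpy as np

def hard_sigmoid(x: np.ndarray) -> np.ndarray:
    """Piecewise-linear sigmoid: a multiply, an add, and a clamp, all of
    which map to cheap fixed-point FPGA logic (no exponential unit)."""
    return np.clip(0.25 * x + 0.5, 0.0, 1.0)

x = np.linspace(-8.0, 8.0, 10001)
err = np.max(np.abs(hard_sigmoid(x) - 1.0 / (1.0 + np.exp(-x))))
print(f"max abs error vs. exact sigmoid: {err:.3f}")  # about 0.12
```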

Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks

no code implementations • 13 Sep 2018 • Siyue Wang, Xiao Wang, Pu Zhao, Wujie Wen, David Kaeli, Peter Chin, Xue Lin

Based on observations of the effect of the test dropout rate on test accuracy and attack success rate, we propose a defensive dropout algorithm that determines an optimal test dropout rate given the neural network model and the attacker's strategy for generating adversarial examples. We also investigate the mechanism behind the outstanding defense effects achieved by the proposed defensive dropout.
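
Defensive dropout keeps dropout active at test time and tunes the test dropout rate as a defense knob. A minimal PyTorch sketch of the inference side (the architecture and rate are illustrative; the rate-selection search against a given attack is omitted):

```python
import torch
import torch.nn as nn

class DefensiveDropoutNet(nn.Module):
    """Toy classifier whose dropout stays active at test time, making the
    test dropout rate a tunable defense knob (architecture illustrative)."""

    def __init__(self, drop_rate: float = 0.3):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(784, 300), nn.ReLU(),
                                 nn.Dropout(p=drop_rate), nn.Linear(300, 10))

    def forward(self, x):
        return self.net(x)

model = DefensiveDropoutNet(drop_rate=0.3)
model.eval()
for m in model.modules():       # re-enable dropout despite eval() mode
    if isinstance(m, nn.Dropout):
        m.train()
# One would then sweep drop_rate, measuring test accuracy against attack
# success rate, and keep the rate with the best trade-off.
print(model(torch.randn(1, 1, 28, 28)).argmax(dim=1))
```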
