Search Results for author: Shanqing Guo

Found 7 papers, 4 papers with code

RNN-Guard: Certified Robustness Against Multi-frame Attacks for Recurrent Neural Networks

no code implementations • 17 Apr 2023 • Yunruo Zhang, Tianyu Du, Shouling Ji, Peng Tang, Shanqing Guo

In this paper, we propose RNN-Guard, the first certified defense against multi-frame attacks for RNNs.
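The listing shows only the abstract's first sentence, so RNN-Guard's exact bounding scheme is not described here. As a hedged illustration of how certified robustness for RNNs is often computed, below is a minimal interval-bound-propagation (IBP) sketch through a vanilla RNN step under per-frame L∞ perturbations; the weights, dimensions, and epsilon are made-up demo values, and IBP is a generic certification building block, not necessarily the paper's method.

```python
import numpy as np

def interval_rnn_step(W_x, W_h, b, x_lo, x_hi, h_lo, h_hi):
    """Propagate elementwise interval bounds through one vanilla RNN step
    h' = tanh(W_x @ x + W_h @ h + b). Generic interval bound propagation
    (IBP), shown only to illustrate RNN certification; RNN-Guard's actual
    bounding scheme may differ."""
    def affine_bounds(W, lo, hi):
        # Split W into positive and negative parts for tight interval images.
        W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
        return W_pos @ lo + W_neg @ hi, W_pos @ hi + W_neg @ lo
    zx_lo, zx_hi = affine_bounds(W_x, x_lo, x_hi)
    zh_lo, zh_hi = affine_bounds(W_h, h_lo, h_hi)
    pre_lo, pre_hi = zx_lo + zh_lo + b, zx_hi + zh_hi + b
    # tanh is monotone, so interval bounds pass through it directly.
    return np.tanh(pre_lo), np.tanh(pre_hi)

# Multi-frame threat model: every input frame may be perturbed within eps.
rng = np.random.default_rng(0)
d, h = 4, 8
W_x, W_h, b = rng.normal(size=(h, d)), rng.normal(size=(h, h)), np.zeros(h)
eps = 0.01
h_lo = h_hi = np.zeros(h)
for _ in range(5):  # five perturbed input frames
    x = rng.normal(size=d)
    h_lo, h_hi = interval_rnn_step(W_x, W_h, b, x - eps, x + eps, h_lo, h_hi)
print("certified hidden-state bounds:", h_lo.round(3), h_hi.round(3))
```

If the final bounds keep the true class's score above all others, the prediction is certified for every multi-frame perturbation within eps.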

Label Inference Attacks Against Vertical Federated Learning

2 code implementations • USENIX Security 2022 • Chong Fu, Xuhong Zhang, Shouling Ji, Jinyin Chen, Jingzheng Wu, Shanqing Guo, Jun Zhou, Alex X. Liu, Ting Wang

However, we discover that a malicious participant can exploit the bottom model structure and the gradient update mechanism of VFL to infer the privately owned labels (illustrated in the sketch below).

Vertical Federated Learning
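The abstract points to the gradient update mechanism as the leakage channel. A hedged, minimal sketch of one classic way gradients leak labels: for a binary cross-entropy head, the gradient with respect to the logit is sigmoid(z) - y, whose sign reveals y exactly. This illustrates gradient-based label leakage in general, not the specific attacks developed in this paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Binary cross-entropy on a single logit z with label y:
#   dL/dz = sigmoid(z) - y
# Since sigmoid(z) lies in (0, 1), the gradient is negative iff y = 1,
# so a participant who observes per-example gradients can read off labels.
rng = np.random.default_rng(0)
logits = rng.normal(size=10)
labels = rng.integers(0, 2, size=10)
grads = sigmoid(logits) - labels        # gradient w.r.t. each logit
inferred = (grads < 0).astype(int)      # negative gradient => label 1
print("true:    ", labels)
print("inferred:", inferred)            # matches exactly
```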

Seeing is Living? Rethinking the Security of Facial Liveness Verification in the Deepfake Era

no code implementations • 22 Feb 2022 • Changjiang Li, Li Wang, Shouling Ji, Xuhong Zhang, Zhaohan Xi, Shanqing Guo, Ting Wang

Facial Liveness Verification (FLV) is widely used for identity authentication in many security-sensitive domains and offered as Platform-as-a-Service (PaaS) by leading cloud vendors.

DeepFake Detection • Face Swapping

SoK: A Modularized Approach to Study the Security of Automatic Speech Recognition Systems

1 code implementation • 19 Mar 2021 • Yuxuan Chen, Jiangshan Zhang, Xuejing Yuan, Shengzhi Zhang, Kai Chen, XiaoFeng Wang, Shanqing Guo

In this paper, we present our systematization of knowledge for ASR security and provide a comprehensive taxonomy for existing work based on a modularized workflow.

Adversarial Attack • Automatic Speech Recognition • +3

Learning Symmetric and Asymmetric Steganography via Adversarial Training

no code implementations • 13 Mar 2019 • Zheng Li, Ge Han, Yunqing Wei, Shanqing Guo

Steganography refers to the art of concealing secret messages within multiple media carriers so that an eavesdropper is unable to detect the presence and content of the hidden messages.
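As a hedged point of contrast, below is the classic hand-crafted least-significant-bit (LSB) embedding that learned steganography of this kind aims to improve upon: message bits are hidden in pixel LSBs with a distortion of at most one intensity level. This is a textbook baseline, not the paper's adversarially trained scheme.

```python
import numpy as np

def lsb_embed(cover, bits):
    """Hide a bit string in the least significant bits of a cover image."""
    stego = cover.copy().ravel()
    stego[: len(bits)] = (stego[: len(bits)] & 0xFE) | bits
    return stego.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Recover the first n_bits hidden bits from a stego image."""
    return stego.ravel()[:n_bits] & 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
message = np.frombuffer(b"hi", dtype=np.uint8)
bits = np.unpackbits(message)                        # 16 bits for two bytes
stego = lsb_embed(cover, bits)
recovered = np.packbits(lsb_extract(stego, len(bits))).tobytes()
print(recovered)                                     # b'hi'
print(int(np.abs(stego.astype(int) - cover).max()))  # distortion <= 1
```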

How to Prove Your Model Belongs to You: A Blind-Watermark based Framework to Protect Intellectual Property of DNN

1 code implementation • 5 Mar 2019 • Zheng Li, Chengyu Hu, Yang Zhang, Shanqing Guo

To fill these gaps, in this paper, we propose a novel blind-watermark-based intellectual property protection (IPP) framework for watermarking deep neural networks that meets the requirements of security and feasibility.

Machine Translation
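The abstract names the framework but not its protocol. As a hedged sketch of the verification side of trigger-set DNN watermarking in general, below is a generic ownership check that queries a suspect model on secret keyed inputs and compares the outputs against owner-chosen labels; the function names and threshold are illustrative assumptions, not the paper's actual blind-watermark algorithm.

```python
import numpy as np

def verify_ownership(model_predict, triggers, key_labels, threshold=0.9):
    """Generic trigger-set verification: a watermarked model should
    reproduce the owner's secret labels on the trigger inputs far more
    often than chance. `model_predict` maps a batch of inputs to predicted
    labels. Illustrates trigger-set verification in general, not this
    paper's exact blind-watermark protocol."""
    preds = model_predict(triggers)
    match_rate = float(np.mean(preds == key_labels))
    return match_rate, match_rate >= threshold

# Toy demo: a "stolen" model that memorized the trigger labels versus an
# independent model that never saw them.
rng = np.random.default_rng(0)
triggers = rng.normal(size=(50, 16))           # secret keyed inputs
key_labels = rng.integers(0, 10, size=50)      # owner-chosen labels

stolen = lambda x: key_labels                  # reproduces the watermark
independent = lambda x: rng.integers(0, 10, size=len(x))

print(verify_ownership(stolen, triggers, key_labels))       # (1.0, True)
print(verify_ownership(independent, triggers, key_labels))  # ~(0.1, False)
```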
