RNN-Test: Towards Adversarial Testing for Recurrent Neural Network Systems

11 Nov 2019  ·  Jianmin Guo, Yue Zhao, Quan Zhang, Yu Jiang

While substantial effort has been devoted to adversarial testing of convolutional neural networks (CNNs), testing of recurrent neural networks (RNNs) remains limited, leaving vast sequential application domains exposed. In this paper, we propose RNN-Test, an adversarial testing framework for RNN systems that targets the main sequential domains rather than classification tasks alone. First, we design a novel search methodology customized for RNN models, which produces adversarial inputs by maximizing the inconsistency of RNN states. Next, we introduce two state-based coverage metrics tailored to the distinctive structure of RNNs, so as to exercise more of the model's inference logic. Finally, RNN-Test solves the joint optimization problem of maximizing state inconsistency and state coverage, crafting adversarial inputs for various tasks and input types. For evaluation, we apply RNN-Test to three sequential models with common RNN structures. On the tested models, RNN-Test proves competitive in generating adversarial inputs, outperforming FGSM-based and DLFuzz-based methods by degrading model performance more sharply, with a 2.78% to 32.5% higher success (or generation) rate. RNN-Test also achieves a 52.65% to 66.45% higher adversary rate on the MNIST-LSTM model than the related work testRNN. Compared with neuron coverage, the proposed state coverage metrics used as guidance excel with a 4.17% to 97.22% higher success (or generation) rate.
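The abstract describes the method only at a high level, so the following is a minimal, hypothetical sketch of the kind of joint optimization it outlines, assuming a toy MNIST-style LSTM classifier in PyTorch. The particular readings of "state inconsistency" (drift between consecutive hidden states) and "state coverage" (fraction of activations above a threshold, softened for differentiability), as well as every identifier (`LSTMClassifier`, `craft_adversarial`, `alpha`, `beta`), are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

# Minimal LSTM classifier standing in for the models under test
# (e.g. the MNIST-LSTM model mentioned in the abstract).
class LSTMClassifier(nn.Module):
    def __init__(self, input_size=28, hidden_size=128, num_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):                        # x: [B, T, input_size]
        states, _ = self.lstm(x)                 # states: [B, T, H]
        return self.fc(states[:, -1]), states


def state_inconsistency(states):
    # One plausible objective: the total drift between consecutive
    # hidden states; maximizing it pushes the recurrence off its
    # usual trajectory.
    return (states[:, 1:] - states[:, :-1]).norm(dim=-1).sum()


def soft_state_coverage(states, threshold=0.5, temperature=10.0):
    # Differentiable surrogate for a boolean "unit active at step t"
    # coverage count (the hard count has zero gradient almost everywhere).
    return torch.sigmoid(temperature * (states.abs() - threshold)).mean()


def craft_adversarial(model, x, steps=10, lr=0.01, alpha=1.0, beta=0.5):
    """Gradient-ascent search on the joint objective
    alpha * inconsistency + beta * coverage, in the spirit of RNN-Test."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        _, states = model(x_adv)
        objective = (alpha * state_inconsistency(states)
                     + beta * soft_state_coverage(states))
        grad, = torch.autograd.grad(objective, x_adv)
        x_adv = (x_adv + lr * grad.sign()).detach()  # FGSM-style step
    return x_adv


# Usage: perturb one MNIST digit treated as a 28-step sequence of rows.
model = LSTMClassifier()
x = torch.rand(1, 28, 28)
x_adv = craft_adversarial(model, x)
```

The hard coverage count is kept only as a reporting metric in this sketch; the optimization itself uses the sigmoid surrogate so that both terms of the joint objective carry gradient back to the input.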
