Metamorphic Relation Based Adversarial Attacks on Differentiable Neural Computer

7 Sep 2018 · Alvin Chan, Lei Ma, Felix Juefei-Xu, Xiaofei Xie, Yang Liu, Yew Soon Ong

Deep neural networks (DNNs), while becoming the driving force behind many novel technologies and achieving tremendous success in many cutting-edge applications, are still vulnerable to adversarial attacks. The differentiable neural computer (DNC) is a novel computing machine with a DNN as its central controller operating on an external memory module for data processing. The unique architecture of the DNC contributes to its state-of-the-art performance in tasks that require the ability to represent variables and data structures as well as to store data over long timescales. However, a comprehensive study of how adversarial examples affect the robustness of the DNC is still lacking. In this paper, we propose metamorphic relation based adversarial techniques for a range of tasks in the natural language processing domain. We show that the near-perfect performance of the DNC on the bAbI logical question answering tasks can be degraded by adversarially injected sentences. We further perform an in-depth study of the role of the DNC's memory size in its robustness and analyze the potential reasons why the DNC fails. Our study demonstrates the current challenges and potential opportunities toward constructing more robust DNCs.
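
The attack idea can be sketched as a metamorphic relation check: injecting a sentence that is semantically irrelevant to the question should leave a QA model's answer unchanged, so any injection that flips the answer exposes an adversarial input. The minimal sketch below assumes a hypothetical `model.answer(story, question)` interface and a simple distractor template; the paper's actual injection strategies may differ.

```python
import random

def make_distractor(entities):
    """Build a sentence about an entity unrelated to the question.
    (Assumed template; purely illustrative.)"""
    actor = random.choice(entities)
    place = random.choice(["garden", "office", "hallway"])
    return f"{actor} went to the {place}."

def metamorphic_attack(model, story, question, expected, entities, trials=20):
    """Metamorphic relation: inserting an irrelevant sentence into the
    story should not change the answer. A changed answer flags an
    adversarial, relation-violating input."""
    if model.answer(story, question) != expected:
        return None  # model is already wrong on the clean story
    for _ in range(trials):
        # Insert the distractor at a random position in the story.
        pos = random.randrange(len(story) + 1)
        mutated = story[:pos] + [make_distractor(entities)] + story[pos:]
        if model.answer(mutated, question) != expected:
            return mutated  # found an adversarial story
    return None
```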
