Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification

1 Dec 2018 · Qi Lei, Lingfei Wu, Pin-Yu Chen, Alexandros G. Dimakis, Inderjit S. Dhillon, Michael Witbrock

Adversarial examples are carefully constructed modifications to an input that completely change the output of a classifier but are imperceptible to humans. Despite these successful attacks on continuous data (such as image and audio samples), generating adversarial examples for discrete structures such as text has proven significantly more challenging...
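The title frames the attack as a submodular optimization problem, for which a greedy algorithm gives a (1 − 1/e) approximation guarantee for monotone objectives under a cardinality constraint (Nemhauser et al., 1978). The sketch below is not the paper's attack; it is a generic greedy submodular maximizer run on a hypothetical toy objective, where each candidate "word edit" covers a set of classifier features and the goal is to pick at most k edits covering as many features as possible.

```python
from typing import Callable, FrozenSet, Iterable, List, Set


def greedy_submodular(ground: Iterable[int],
                      f: Callable[[FrozenSet[int]], float],
                      k: int) -> List[int]:
    """Greedily maximize a monotone submodular set function f under
    the cardinality constraint |S| <= k.

    Each round adds the element with the largest marginal gain
    f(S + e) - f(S); for monotone submodular f this achieves the
    classic (1 - 1/e) approximation to the optimum.
    """
    elements = list(ground)
    chosen: Set[int] = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        base = f(frozenset(chosen))
        for e in elements:
            if e in chosen:
                continue
            gain = f(frozenset(chosen | {e})) - base
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no element adds positive marginal gain
            break
        chosen.add(best)
    return sorted(chosen)


# Hypothetical toy instance: coverage functions are submodular.
# Edit e "covers" the feature set covers[e]; f(S) counts covered features.
covers = {0: {1, 2}, 1: {2, 3}, 2: {5}, 3: {1, 2, 3, 4}}
f = lambda S: float(len(set().union(*(covers[e] for e in S)) if S else set()))

print(greedy_submodular(covers.keys(), f, k=2))  # → [2, 3]
```

With k=2 the greedy pass first takes edit 3 (marginal gain 4), after which only edit 2 still adds an uncovered feature, illustrating the diminishing-returns property that makes the greedy heuristic effective on this class of objectives.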
