Enhancing Cross-task Black-Box Transferability of Adversarial Examples with Dispersion Reduction

22 Nov 2019 · Yantao Lu, Yunhan Jia, Jianyu Wang, Bai Li, Weiheng Chai, Lawrence Carin, Senem Velipasalar

Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they remain adversarial even against other models. Although great effort has been devoted to transferability across models, surprisingly little attention has been paid to cross-task transferability, which reflects the real-world cybercriminal's situation, where an ensemble of different defense/detection mechanisms must be evaded all at once...
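As a rough illustration of the dispersion-reduction idea named in the title (the specifics below are an assumption, not taken from the truncated abstract): instead of maximizing a task-specific loss, the attacker perturbs the input so that the dispersion (standard deviation) of an intermediate feature map shrinks, since a low-dispersion feature map carries little information for any downstream task. In this minimal NumPy sketch, a fixed random linear map stands in for a CNN's intermediate layer:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))   # stand-in for an intermediate CNN layer
x = rng.standard_normal(32)         # stand-in for an input image

def features(x):
    return W @ x

def dispersion(f):
    # "dispersion" here is the standard deviation of the feature map
    return f.std()

def grad_dispersion(x):
    # analytic gradient of std(Wx) w.r.t. x:
    # std = sqrt(mean((f - mean(f))^2));  d std / d f_i = (f_i - mean(f)) / (n * std)
    f = features(x)
    g_f = (f - f.mean()) / (f.size * f.std())
    return W.T @ g_f

# sign-gradient descent on dispersion, projected into an L_inf ball
# (hyperparameters are illustrative, not from the paper)
eps, alpha, steps = 0.5, 0.05, 50
x_adv = x.copy()
for _ in range(steps):
    x_adv = x_adv - alpha * np.sign(grad_dispersion(x_adv))  # reduce dispersion
    x_adv = x + np.clip(x_adv - x, -eps, eps)                # enforce budget

print(dispersion(features(x)), dispersion(features(x_adv)))
```

Because the objective is defined on an internal representation rather than on any task's output, the same perturbation degrades whatever downstream head consumes those features, which is the intuition behind cross-task transfer.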

