no code implementations • ICCV 2021 • Cheng Yu, Jiansheng Chen, Youze Xue, Yuyang Liu, Weitao Wan, Jiayu Bao, Huimin Ma
Physical-world adversarial attacks based on universal adversarial patches have been shown to mislead deep convolutional neural networks (CNNs), exposing the vulnerability of real-world CNN-based visual classification systems.
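As a rough illustration of the universal-patch setting (not this paper's method), the sketch below pastes a single fixed patch onto an input image and checks whether a CNN's prediction flips; the patch and image are random stand-ins, and the model choice (torchvision's resnet18) is an assumption. A real attack would optimize one patch over many images so it transfers across inputs.

```python
import torch
from torchvision.models import resnet18, ResNet18_Weights

# Pretrained classifier to attack; any CNN classifier would do here.
model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

def apply_patch(image: torch.Tensor, patch: torch.Tensor, x: int, y: int) -> torch.Tensor:
    """Overwrite a rectangular region of `image` (C, H, W) with `patch`."""
    patched = image.clone()
    _, ph, pw = patch.shape
    patched[:, y:y + ph, x:x + pw] = patch
    return patched

image = torch.rand(3, 224, 224)  # stand-in for a real preprocessed input
patch = torch.rand(3, 50, 50)    # stand-in for an optimized universal patch

with torch.no_grad():
    clean_pred = model(image.unsqueeze(0)).argmax(dim=1).item()
    adv_pred = model(apply_patch(image, patch, x=87, y=87).unsqueeze(0)).argmax(dim=1).item()

print(f"clean: {clean_pred}, patched: {adv_pred}, fooled: {clean_pred != adv_pred}")
```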
1 code implementation • 26 Dec 2020 • Jiayu Bao
Many adversarial attacks have been proposed against image classifiers, but little work has shifted attention to object detectors.
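To make the classifier-vs-detector distinction concrete, here is a minimal FGSM-style sketch (not this paper's attack) against an object detector: the image is perturbed in the direction that increases the detector's training loss. It uses torchvision's Faster R-CNN with random weights; the box and label are dummy ground truth for illustration. A real attack would iterate this step PGD-style rather than taking a single gradient sign step.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Detection models in torchvision return a dict of losses only in train mode.
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=91)
model.train()

image = torch.rand(3, 300, 400, requires_grad=True)
targets = [{
    "boxes": torch.tensor([[50.0, 60.0, 200.0, 220.0]]),  # dummy ground-truth box
    "labels": torch.tensor([1]),
}]

# Backprop the detector's combined loss (classification, box regression,
# objectness, RPN box regression) to the input pixels.
losses = model([image], targets)
total_loss = sum(losses.values())
total_loss.backward()

# Single signed-gradient step under an L_inf budget of 8/255.
epsilon = 8.0 / 255.0
adv_image = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()
```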