no code implementations • 27 Dec 2023 • Xianyi Chen, Fazhan Liu, Dong Jiang, Kai Yan
Recently, research has shown that deep neural networks are vulnerable to adversarial attacks: well-trained adversarial samples or patches can be used to trick a neural network detector or human visual perception.
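The adversarial samples mentioned above are typically crafted by perturbing an input in the direction of the loss gradient. As a minimal illustration (not the method of this paper), the sketch below applies the Fast Gradient Sign Method (FGSM) to a toy logistic-regression "detector" in NumPy; the weights, input, and epsilon are all made-up values for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """FGSM on a toy logistic-regression classifier.

    The gradient of the binary cross-entropy loss with respect to the
    input x is (sigmoid(w.x + b) - y) * w; FGSM takes one step of size
    eps in the sign of that gradient to increase the loss.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy setup: an input aligned with the weights, confidently class 1.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
x = w * 0.5
y = 1.0

clean_score = sigmoid(np.dot(w, x) + b)
x_adv = fgsm_perturb(x, w, b, y, eps=2.0)
adv_score = sigmoid(np.dot(w, x_adv) + b)
# The perturbed input scores lower, i.e. the "detector" is pushed
# toward misclassifying it.
```

Real attacks apply the same idea to deep networks (computing the input gradient via backpropagation), and patch attacks restrict the perturbation to a small image region rather than the whole input.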