1 code implementation • 21 Aug 2023 • Shuo Zhang, Ziruo Wang, Zikai Zhou, Huanran Chen
Deep neural networks are vulnerable to adversarial examples, which threatens their deployment in applications and raises security concerns.
1 code implementation • 7 Aug 2023 • Zikai Zhou, Shuo Zhang, Ziruo Wang, Huanran Chen
The success of deep learning is inseparable from normalization layers.