no code implementations • 22 Feb 2024 • Futa Waseda, Isao Echizen
Although adversarial training has been the state-of-the-art approach to defending against adversarial examples (AEs), it suffers from a robustness-accuracy trade-off.
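The robustness-accuracy trade-off mentioned above comes from training on worst-case inputs rather than clean ones. A minimal sketch of one generic adversarial-training step (FGSM-based inner maximization on a toy logistic model, not the paper's method; all weights and data here are hypothetical):

```python
import numpy as np

def loss_and_grads(w, x, y):
    # Logistic loss for a linear model; returns loss, grad w.r.t. w, grad w.r.t. x.
    z = float(w @ x)
    p = 1.0 / (1.0 + np.exp(-z))
    loss = -np.log(p if y == 1 else 1.0 - p)
    return loss, (p - y) * x, (p - y) * w

w = np.array([0.5, -0.3])           # toy model weights (hypothetical)
x, y = np.array([1.0, 2.0]), 1      # one clean training example
eps, lr = 0.1, 0.5

# Inner maximization: FGSM step that increases the loss around x.
_, _, gx = loss_and_grads(w, x, y)
x_adv = x + eps * np.sign(gx)

# Outer minimization: update the model on the perturbed input only.
loss_clean, _, _ = loss_and_grads(w, x, y)
loss_adv, gw, _ = loss_and_grads(w, x_adv, y)
w = w - lr * gw

print(loss_adv >= loss_clean)  # True: the inner step found a harder input
```

Because the update is driven by `x_adv` instead of `x`, the model fits perturbed data at the expense of the clean distribution, which is one intuition for the trade-off.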
no code implementations • 27 Sep 2023 • Lukas Strack, Futa Waseda, Huy H. Nguyen, Yinqiang Zheng, Isao Echizen
To address this problem, we are the first to investigate defense strategies against adversarial patch attacks on infrared detection, especially human detection.
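For intuition about what a patch attack manipulates, here is a generic toy sketch (not the paper's attack or defense; the array sizes and values are made up): a small patch pasted into a thermal frame changes only a localized region, which is what patch defenses must detect or neutralize.

```python
import numpy as np

def apply_patch(image, patch, top, left):
    # Paste a patch into the image at (top, left), returning a copy.
    out = image.copy()
    h, w = patch.shape
    out[top:top + h, left:left + w] = patch
    return out

thermal = np.full((8, 8), 0.2)   # toy single-channel infrared frame
patch = np.ones((3, 3))          # hypothetical adversarial patch (saturated)
attacked = apply_patch(thermal, patch, 2, 2)

print(int((attacked != thermal).sum()))  # 9: only the patched pixels change
```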
1 code implementation • 10 Feb 2023 • Christian Tomani, Futa Waseda, Yuesong Shen, Daniel Cremers
While existing post-hoc calibration methods achieve impressive results on in-domain test datasets, they are limited by their inability to yield reliable uncertainty estimates in domain-shift and out-of-domain (OOD) scenarios.
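As a concrete instance of post-hoc calibration, here is temperature scaling, a standard baseline (not the paper's proposed method; the logits and labels are toy values): a single scalar T rescales the logits so softmax confidences better match accuracy, fitted here by grid search on the negative log-likelihood.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    # Negative log-likelihood of the true labels at temperature T.
    p = softmax(logits / T)
    return -np.log(p[np.arange(len(labels)), labels]).mean()

# Overconfident toy logits: large margins, but wrong on 1 of 4 examples.
logits = np.array([[4.0, 0.0], [4.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
labels = np.array([0, 0, 0, 0])

# In practice T is fit on a held-out validation set.
Ts = np.linspace(0.5, 10.0, 200)
T_best = Ts[np.argmin([nll(logits, labels, T) for T in Ts])]

print(T_best > 1.0)  # True: softening the overconfident logits lowers the NLL
```

Because T is fit in-domain, this baseline illustrates exactly the limitation the abstract points to: nothing forces the calibrated confidences to remain reliable under domain shift or on OOD inputs.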
no code implementations • 29 Dec 2021 • Futa Waseda, Sosuke Nishikawa, Trung-Nghia Le, Huy H. Nguyen, Isao Echizen
Deep neural networks are vulnerable to adversarial examples (AEs), which have adversarial transferability: AEs generated for the source model can mislead another (target) model's predictions.
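Adversarial transferability can be shown with a toy example (hypothetical linear models, not the paper's setup): an FGSM perturbation crafted against a "source" classifier also flips the prediction of a separate "target" classifier with similar weights.

```python
import numpy as np

def predict(w, x):
    # Binary linear classifier: class 1 if the score is positive.
    return 1 if float(w @ x) > 0 else 0

def fgsm(w, x, eps):
    # For a linear model and a positively-labelled input, the loss gradient
    # w.r.t. x is proportional to -w, so stepping along -sign(w) lowers the score.
    return x - eps * np.sign(w)

source_w = np.array([1.0, 2.0, -1.0])   # model the attacker can access
target_w = np.array([0.8, 1.5, -0.9])   # unseen model with correlated weights
x = np.array([0.5, 0.2, -0.1])          # clean input, class 1 under both models

x_adv = fgsm(source_w, x, eps=0.6)

print(predict(source_w, x), predict(target_w, x))          # clean: 1 1
print(predict(source_w, x_adv), predict(target_w, x_adv))  # adversarial: 0 0
```

The AE was computed only from `source_w`, yet it also fools the target model because the two decision boundaries are similar, which is the transferability property the abstract describes.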