Search Results for author: Futa Waseda

Found 4 papers, 1 paper with code

Rethinking Invariance Regularization in Adversarial Training to Improve Robustness-Accuracy Trade-off

no code implementations · 22 Feb 2024 · Futa Waseda, Isao Echizen

Although adversarial training has been the state-of-the-art approach to defending against adversarial examples (AEs), it suffers from a robustness-accuracy trade-off.

Knowledge Distillation · Self-Supervised Learning
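
The abstract refers to invariance regularization between clean and adversarial predictions. As a point of reference, here is a minimal TRADES-style sketch of such a regularizer (clean cross-entropy plus a KL term pulling adversarial outputs toward clean ones); this is a standard baseline formulation, not the paper's proposed method, and `beta` and the PGD parameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard L-infinity PGD inner maximization (illustrative settings)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def invariance_regularized_loss(model, x, y, beta=6.0):
    """Clean cross-entropy plus a KL invariance term between clean and
    adversarial output distributions (TRADES-style sketch, not the
    paper's exact formulation)."""
    x_adv = pgd_attack(model, x, y)
    logits_clean, logits_adv = model(x), model(x_adv)
    ce = F.cross_entropy(logits_clean, y)
    kl = F.kl_div(F.log_softmax(logits_adv, dim=1),
                  F.softmax(logits_clean, dim=1), reduction="batchmean")
    return ce + beta * kl
```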

Defending Against Physical Adversarial Patch Attacks on Infrared Human Detection

no code implementations · 27 Sep 2023 · Lukas Strack, Futa Waseda, Huy H. Nguyen, Yinqiang Zheng, Isao Echizen

To address this problem, we are the first to investigate defense strategies against adversarial patch attacks on infrared detection, especially human detection.

Data Augmentation · Human Detection
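
The Data Augmentation tag suggests an occlusion-style training defense. Below is a minimal, hypothetical sketch that pastes random-noise patches into training images to simulate patch attacks; the function name and all parameters are illustrative, and the paper's actual defense strategy may differ.

```python
import torch

def random_patch_augment(images: torch.Tensor, patch_size: float = 0.3,
                         p: float = 0.5) -> torch.Tensor:
    """Occlude a random square region of each image with uniform noise.

    A generic occlusion augmentation in the spirit of patch-robust
    training; an illustrative sketch, not the paper's exact defense.
    """
    b, c, h, w = images.shape
    out = images.clone()
    ph, pw = int(h * patch_size), int(w * patch_size)
    for i in range(b):
        if torch.rand(1).item() < p:
            top = torch.randint(0, h - ph + 1, (1,)).item()
            left = torch.randint(0, w - pw + 1, (1,)).item()
            # Overwrite the region with noise to mimic an adversarial patch.
            out[i, :, top:top + ph, left:left + pw] = torch.rand(
                c, ph, pw, device=images.device)
    return out
```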

Beyond In-Domain Scenarios: Robust Density-Aware Calibration

1 code implementation · 10 Feb 2023 · Christian Tomani, Futa Waseda, Yuesong Shen, Daniel Cremers

While existing post-hoc calibration methods achieve impressive results on in-domain test datasets, they are limited by their inability to yield reliable uncertainty estimates in domain-shift and out-of-domain (OOD) scenarios.
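
For contrast with the in-domain post-hoc methods the abstract mentions, here is a minimal sketch of temperature scaling, the standard post-hoc calibration baseline that works well in-domain but degrades under shift; the paper's density-aware approach goes beyond this. The optimizer settings here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fit_temperature(logits: torch.Tensor, labels: torch.Tensor,
                    max_iter: int = 100) -> float:
    """Fit a single softmax temperature on held-out logits by minimizing
    NLL (standard temperature scaling; a sketch, not the paper's
    density-aware method)."""
    log_t = torch.zeros(1, requires_grad=True)  # log-space keeps T > 0
    opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=max_iter)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    opt.step(closure)
    return float(log_t.exp())
```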

Closer Look at the Transferability of Adversarial Examples: How They Fool Different Models Differently

no code implementations · 29 Dec 2021 · Futa Waseda, Sosuke Nishikawa, Trung-Nghia Le, Huy H. Nguyen, Isao Echizen

Deep neural networks are vulnerable to adversarial examples (AEs), which exhibit adversarial transferability: AEs generated for a source model can mislead another (target) model's predictions.
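
A minimal sketch of measuring transferability in the setup the abstract describes: craft AEs on a source model and count how often they flip a target model's predictions. The one-step FGSM attack and `eps` here are illustrative assumptions, not the paper's evaluation protocol.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step FGSM attack, white-box on the source model."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def transfer_rate(source, target, x, y, eps=8 / 255):
    """Fraction of AEs crafted on `source` that also fool `target`."""
    x_adv = fgsm(source, x, y, eps)
    with torch.no_grad():
        preds = target(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()
```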
