no code implementations • 11 Oct 2024 • Shojiro Yamabe, Tsubasa Takahashi, Futa Waseda, Koki Wataoka
As the cost of training large language models (LLMs) rises, protecting their intellectual property has become increasingly critical.
no code implementations • 29 May 2024 • Futa Waseda, Antonio Tejero-de-Pablos
To this end, this paper is the first to study defense strategies against adversarial attacks on vision-language (VL) models for image-text retrieval (ITR).
no code implementations • 22 Feb 2024 • Futa Waseda, Ching-Chun Chang, Isao Echizen
Although adversarial training has been the state-of-the-art approach to defend against adversarial examples (AEs), it suffers from a robustness-accuracy trade-off, where high robustness is achieved at the cost of clean accuracy.
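For context, the sketch below shows one step of standard PGD-based adversarial training, the kind of defense whose robustness-accuracy trade-off the abstract refers to; it is a generic illustration under assumed placeholders (`model`, `images`, `labels`), not the method proposed in the paper.

```python
# Minimal sketch of standard PGD adversarial training (Madry et al. style).
# Assumes `model` is a classifier and `images`/`labels` are a batch on the right device.
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    """Craft L-infinity bounded adversarial examples with projected gradient descent."""
    adv = images.clone().detach()
    adv = torch.clamp(adv + torch.empty_like(adv).uniform_(-eps, eps), 0, 1)  # random start
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()                      # ascend the loss
        adv = torch.min(torch.max(adv, images - eps), images + eps)   # project to eps-ball
        adv = torch.clamp(adv, 0, 1)
    return adv.detach()

def adversarial_training_step(model, optimizer, images, labels):
    """Train on adversarial examples only; robustness is gained at the cost of clean accuracy."""
    model.eval()
    adv = pgd_attack(model, images, labels)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```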
no code implementations • 27 Sep 2023 • Lukas Strack, Futa Waseda, Huy H. Nguyen, Yinqiang Zheng, Isao Echizen
To address this problem, we are the first to investigate defense strategies against adversarial patch attacks on infrared detectors, with a focus on human detection.
1 code implementation • 10 Feb 2023 • Christian Tomani, Futa Waseda, Yuesong Shen, Daniel Cremers
While existing post-hoc calibration methods achieve impressive results on in-domain test datasets, they are limited by their inability to yield reliable uncertainty estimates in domain-shift and out-of-domain (OOD) scenarios.
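As a concrete example of the post-hoc calibration methods referred to above, the sketch below implements temperature scaling (Guo et al., 2017), which fits a single temperature on in-domain validation data; `val_logits` and `val_labels` are assumed validation tensors, and this is an illustration of the baseline class of methods, not the paper's approach.

```python
# Minimal sketch of temperature scaling, a standard post-hoc calibration method.
# Assumes `val_logits` has shape (N, C) and `val_labels` holds integer class indices.
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, max_iter=50):
    """Fit a single temperature T > 0 by minimizing NLL on held-out validation data."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so that T stays positive
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=max_iter)

    def closure():
        optimizer.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()

def calibrated_probs(test_logits, temperature):
    """Rescale logits with the fitted temperature before the softmax."""
    return F.softmax(test_logits / temperature, dim=-1)
```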
no code implementations • 29 Dec 2021 • Futa Waseda, Sosuke Nishikawa, Trung-Nghia Le, Huy H. Nguyen, Isao Echizen
Deep neural networks are vulnerable to adversarial examples (AEs), which exhibit adversarial transferability: AEs generated for a source model can mislead another (target) model's predictions.
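To illustrate the transfer setting described above, the sketch below crafts single-step FGSM adversarial examples on a source model and measures how often they fool an unseen target model; `source_model`, `target_model`, and the data loader are assumed placeholders, and this is not the paper's analysis.

```python
# Minimal sketch of measuring adversarial transferability with an FGSM attack.
import torch
import torch.nn.functional as F

def fgsm(model, images, labels, eps=8/255):
    """Craft adversarial examples on the (white-box) source model."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    return torch.clamp(images + eps * grad.sign(), 0, 1).detach()

@torch.no_grad()
def transfer_success_rate(source_model, target_model, loader):
    """Fraction of AEs crafted on the source model that mislead the target model."""
    fooled, total = 0, 0
    for images, labels in loader:
        with torch.enable_grad():  # gradients are needed only while crafting the AEs
            adv = fgsm(source_model, images, labels)
        preds = target_model(adv).argmax(dim=-1)
        fooled += (preds != labels).sum().item()
        total += labels.numel()
    return fooled / total
```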