Search Results for author: Futa Waseda

Found 6 papers, 1 paper with code

MergePrint: Robust Fingerprinting against Merging Large Language Models

no code implementations • 11 Oct 2024 • Shojiro Yamabe, Tsubasa Takahashi, Futa Waseda, Koki Wataoka

As the cost of training large language models (LLMs) rises, protecting their intellectual property has become increasingly critical.

Rethinking Invariance Regularization in Adversarial Training to Improve Robustness-Accuracy Trade-off

no code implementations • 22 Feb 2024 • Futa Waseda, Ching-Chun Chang, Isao Echizen

Although adversarial training has been the state-of-the-art approach to defend against adversarial examples (AEs), it suffers from a robustness-accuracy trade-off, where high robustness is achieved at the cost of clean accuracy.

Knowledge Distillation • Representation Learning • +1

Defending Against Physical Adversarial Patch Attacks on Infrared Human Detection

no code implementations • 27 Sep 2023 • Lukas Strack, Futa Waseda, Huy H. Nguyen, Yinqiang Zheng, Isao Echizen

We are the first to investigate defense strategies against adversarial patch attacks on infrared detection, especially human detection.

Data Augmentation • Human Detection

Beyond In-Domain Scenarios: Robust Density-Aware Calibration

1 code implementation • 10 Feb 2023 • Christian Tomani, Futa Waseda, Yuesong Shen, Daniel Cremers

While existing post-hoc calibration methods achieve impressive results on in-domain test datasets, they are limited by their inability to yield reliable uncertainty estimates in domain-shift and out-of-domain (OOD) scenarios.

Closer Look at the Transferability of Adversarial Examples: How They Fool Different Models Differently

no code implementations • 29 Dec 2021 • Futa Waseda, Sosuke Nishikawa, Trung-Nghia Le, Huy H. Nguyen, Isao Echizen

Deep neural networks are vulnerable to adversarial examples (AEs), which have adversarial transferability: AEs generated for the source model can mislead another (target) model's predictions.
