no code implementations • 29 Mar 2024 • Yun-Yun Tsai, Fu-Chen Chen, Albert Y. C. Chen, Junfeng Yang, Che-Chun Su, Min Sun, Cheng-Hao Kuo
For vision tasks, recent studies have shown that test-time adaptation employing diffusion models can achieve state-of-the-art accuracy improvements on OOD samples by generating new samples that align with the model's training domain, without modifying the model's weights.
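The idea described above (project an OOD input back toward the training domain, then classify with the untouched model) can be sketched in a toy form. The "classifier", "domain mean", and the shrink-toward-domain "denoiser" below are all hypothetical stand-ins; a real method would run a learned diffusion reverse process rather than this linear pull:

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_classifier(x):
    # Toy stand-in for a frozen model: classify by the sign of the mean feature.
    return int(x.mean() > 0.0)

def diffuse_then_denoise(x, domain_mean, noise_scale=0.5, pull=0.8):
    # Hypothetical sketch of diffusion-driven adaptation: add forward noise,
    # then "denoise" by pulling the sample toward training-domain statistics.
    # A real method would run a learned reverse diffusion process instead.
    noisy = x + noise_scale * rng.standard_normal(x.shape)
    return pull * domain_mean + (1.0 - pull) * noisy

# An OOD sample shifted far from the (positive-mean) training domain.
domain_mean = np.full(8, 1.0)
ood_sample = np.full(8, -0.2)

adapted = diffuse_then_denoise(ood_sample, domain_mean)
# The frozen classifier is never updated; only the input moves.
```

The key property the sketch preserves is that `frozen_classifier` is called identically before and after adaptation; only the sample changes.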
no code implementations • 22 Mar 2023 • Yun-Yun Tsai, Ju-Chin Chao, Albert Wen, Zhaoyuan Yang, Chengzhi Mao, Tapan Shah, Junfeng Yang
Test-time defenses address these issues, but most existing test-time defenses require adapting the model weights; they therefore do not work on frozen models and complicate model memory management.
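A weight-free alternative hinted at here is to adapt the *input* instead: optimize the sample itself against an auxiliary objective while the model stays frozen. The quadratic "self-supervised loss" and the feature anchor below are hypothetical placeholders for whatever objective a concrete defense would use:

```python
import numpy as np

def ssl_loss(x, anchor):
    # Hypothetical self-supervised objective: distance to a feature anchor.
    # A real defense might use a contrastive or rotation-prediction loss.
    return 0.5 * np.sum((x - anchor) ** 2)

def adapt_input(x, anchor, lr=0.1, steps=50):
    # Repair the input by gradient descent on the auxiliary loss.
    # Model weights are never touched, so frozen models are supported
    # and no per-model adapted weights need to be stored.
    x = x.copy()
    for _ in range(steps):
        grad = x - anchor          # gradient of the quadratic loss above
        x -= lr * grad
    return x

anchor = np.zeros(4)
corrupted = np.array([2.0, -1.0, 0.5, 3.0])
repaired = adapt_input(corrupted, anchor)
```

Because only the sample is optimized, memory management stays trivial: one set of weights serves all inputs.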
1 code implementation • 16 Jul 2022 • Lei Hsiung, Yun-Yun Tsai, Pin-Yu Chen, Tsung-Yi Ho
Prior literature on adversarial attack methods has mainly focused on attacking with and defending against a single threat model, e.g., perturbations bounded within an $\ell_{p}$ ball.
1 code implementation • CVPR 2023 • Lei Hsiung, Yun-Yun Tsai, Pin-Yu Chen, Tsung-Yi Ho
We then propose generalized adversarial training (GAT) to extend model robustness from $\ell_{p}$-ball to composite semantic perturbations, such as the combination of Hue, Saturation, Brightness, Contrast, and Rotation.
no code implementations • ICML Workshop AML 2021 • Yun-Yun Tsai, Lei Hsiung, Pin-Yu Chen, Tsung-Yi Ho
We then propose generalized adversarial training (GAT) to extend model robustness from $\ell_{p}$ norm to composite semantic perturbations, such as Hue, Saturation, Brightness, Contrast, and Rotation.
3 code implementations • 17 Jun 2021 • Chao-Han Huck Yang, Yun-Yun Tsai, Pin-Yu Chen
Learning to classify time series with limited data is a practical yet challenging problem.
no code implementations • ICML 2020 • Yun-Yun Tsai, Pin-Yu Chen, Tsung-Yi Ho
Current transfer learning methods are mainly based on finetuning a pretrained model with target-domain data.
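The standard finetuning recipe mentioned above can be sketched as training a lightweight head on target-domain data while the pretrained backbone stays fixed. Everything here (the random-projection "backbone", the synthetic data, and the labels) is a toy stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)

def pretrained_features(x):
    # Toy stand-in for a frozen pretrained backbone: a fixed random
    # projection followed by a nonlinearity (seeded, so it never changes).
    W = np.random.default_rng(0).standard_normal((x.shape[1], 4))
    return np.tanh(x @ W)

# Tiny synthetic target-domain dataset: label is 1 when the first input > 0.
X = rng.standard_normal((64, 3))
y = (X[:, 0] > 0).astype(float)

# "Finetune" only a linear head on top of the frozen features
# with plain gradient descent on the logistic loss.
F = pretrained_features(X)
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid predictions
    grad = p - y                              # logistic-loss gradient
    w -= 0.5 * F.T @ grad / len(y)
    b -= 0.5 * grad.mean()

acc = (((F @ w + b) > 0) == (y > 0.5)).mean()
```

Full finetuning would also update the backbone weights; freezing them, as here, is the cheaper variant commonly used when target-domain data is scarce.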