no code implementations • 22 Mar 2023 • Yizhe Li, Yu-Lin Tsai, Xuebin Ren, Chia-Mu Yu, Pin-Yu Chen
Visual Prompting (VP) is an emerging and powerful technique that allows sample-efficient adaptation to downstream tasks by engineering a well-trained frozen source model.
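As a rough illustration of the idea (not the paper's exact method), visual prompting can be sketched as learning a small input-space perturbation that is added to every image before it is fed to a frozen, well-trained source model. The `resnet18` backbone, prompt shape, and ImageNet-style labels below are placeholder assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class VisualPrompt(nn.Module):
    """Single trainable additive prompt shared across all inputs."""
    def __init__(self, image_size=224):
        super().__init__()
        self.prompt = nn.Parameter(torch.zeros(1, 3, image_size, image_size))

    def forward(self, x):
        return x + self.prompt

# Well-trained source model, kept frozen.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

prompt = VisualPrompt()
optimizer = torch.optim.Adam(prompt.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()

# One toy optimization step on random data standing in for a downstream batch.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 1000, (8,))
optimizer.zero_grad()
loss = criterion(backbone(prompt(x)), y)
loss.backward()          # gradients flow only into the prompt
optimizer.step()
```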
no code implementations • 2 Nov 2022 • Jhih-Cing Huang, Yu-Lin Tsai, Chao-Han Huck Yang, Cheng-Fang Su, Chia-Mu Yu, Pin-Yu Chen, Sy-Yen Kuo
Recently, quantum classifiers have been shown to be vulnerable to adversarial attacks, in which imperceptible perturbations fool them into misclassification.
no code implementations • CVPR 2022 • Jia-Wei Chen, Chia-Mu Yu, Ching-Chia Kao, Tzai-Wei Pang, Chun-Shien Lu
Despite an increased demand for valuable data, the privacy concerns associated with sensitive datasets present a barrier to data sharing.
no code implementations • NeurIPS 2021 • Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen
Studying the sensitivity of neural networks to weight perturbations, and its impact on model performance including generalization and robustness, is an active research topic due to its implications for a wide range of machine learning tasks such as model compression, generalization gap assessment, and adversarial attacks.
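One simple way to probe weight-perturbation sensitivity (a hedged sketch, not the paper's analysis) is to measure how the loss on a batch changes as Gaussian noise of increasing scale is added to the weights. The tiny MLP and noise scales below are illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
criterion = nn.CrossEntropyLoss()

x = torch.randn(128, 20)
y = torch.randint(0, 10, (128,))

def loss_under_weight_noise(model, sigma):
    """Loss on the batch after adding N(0, sigma^2) noise to every parameter."""
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(sigma * torch.randn_like(p))
        return criterion(noisy(x), y).item()

baseline = criterion(model(x), y).item()
for sigma in (0.0, 0.01, 0.05, 0.1):
    print(f"sigma={sigma:.2f}  perturbed loss={loss_under_weight_noise(model, sigma):.4f}  clean loss={baseline:.4f}")
```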
1 code implementation • NeurIPS 2021 • Xiao Jin, Pin-Yu Chen, Chia-Yi Hsu, Chia-Mu Yu, Tianyi Chen
We name our proposed method catastrophic data leakage in vertical federated learning (CAFE).
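CAFE's own algorithm is not reproduced here; as background, gradient-based data leakage is often illustrated with a DLG-style reconstruction that optimizes a dummy input so that its gradients match the gradients a client shares. The single linear model and known label below are simplifying assumptions for the sketch.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(16, 4)
criterion = nn.CrossEntropyLoss()

# "Client" data and the gradients it would share with the server.
x_true = torch.randn(1, 16)
y_true = torch.tensor([2])
true_grads = torch.autograd.grad(criterion(model(x_true), y_true), model.parameters())

# Attacker optimizes a dummy input (label assumed known for simplicity)
# so that its gradients match the shared ones.
x_dummy = torch.randn(1, 16, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)

for step in range(300):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(
        criterion(model(x_dummy), y_true), model.parameters(), create_graph=True
    )
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    opt.step()

print("reconstruction error:", (x_dummy.detach() - x_true).norm().item())
```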
no code implementations • AAAI Workshop AdvML 2022 • Chia-Hung Yuan, Pin-Yu Chen, Chia-Mu Yu
A plethora of attack methods have been proposed to generate adversarial examples, among which iterative methods have demonstrated the ability to find strong attacks.
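For context on "iterative methods", a minimal projected-gradient-descent (PGD)-style attack against an arbitrary classifier might look like the sketch below; the toy model, step size, and budget are placeholders and this is not the specific attack studied in the paper.

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=10):
    """L_inf PGD: iteratively ascend the loss, then project back into the eps-ball."""
    criterion = nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = criterion(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)  # project to the eps-ball
            x_adv = x_adv.clamp(0, 1)                         # keep a valid pixel range
    return x_adv.detach()

# Toy usage with a random "classifier" on flattened 8x8 inputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(64, 10))
x = torch.rand(4, 1, 8, 8)
y = torch.randint(0, 10, (4,))
x_adv = pgd_attack(model, x, y)
print((x_adv - x).abs().max().item())  # stays within eps
```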
no code implementations • 4 Sep 2021 • Chang-Sheng Lin, Chia-Yi Hsu, Pin-Yu Chen, Chia-Mu Yu
A CycleGAN is used to generate adversarial makeup, and the victimized classifier uses the VGG-16 architecture.
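A heavily simplified sketch of the generator-plus-victim setup follows: the paper's CycleGAN is replaced by a small placeholder convolutional generator, an untrained torchvision VGG-16 stands in for the victimized classifier, and the objective shown is just an untargeted cross-entropy ascent, all of which are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Placeholder "makeup" generator; the paper uses a CycleGAN instead.
generator = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
)

# VGG-16 architecture as the victim (weights left untrained in this sketch).
victim = models.vgg16(weights=None)
victim.eval()
for p in victim.parameters():
    p.requires_grad = False

criterion = nn.CrossEntropyLoss()
opt = torch.optim.Adam(generator.parameters(), lr=1e-4)

# Toy batch standing in for face images, with their current labels.
faces = torch.rand(2, 3, 224, 224)
labels = torch.randint(0, 1000, (2,))

# One update: make the generated "makeup" push the victim away from the labels.
opt.zero_grad()
makeup = 0.1 * generator(faces)              # small additive makeup perturbation
loss = -criterion(victim(faces + makeup), labels)
loss.backward()
opt.step()
```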
1 code implementation • CVPR 2021 • Jia-Wei Chen, Li-Ju Chen, Chia-Mu Yu, Chun-Shien Lu
However, the sensitive information in the datasets discourages data owners from releasing these datasets.
1 code implementation • 2 Mar 2021 • Chia-Yi Hsu, Pin-Yu Chen, Songtao Lu, Sijia Liu, Chia-Mu Yu
In this paper, we propose a framework of generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation.
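As a rough illustration of attacking an unsupervised model (not the paper's framework), one can perturb an input to maximize an autoencoder's reconstruction error within a small budget; the tiny autoencoder, step count, and budget below are placeholder assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Tiny autoencoder standing in for an unsupervised model.
autoencoder = nn.Sequential(nn.Linear(32, 8), nn.ReLU(), nn.Linear(8, 32))
mse = nn.MSELoss()

x = torch.rand(1, 32)
delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)
eps = 0.1  # L_inf perturbation budget

for _ in range(50):
    opt.zero_grad()
    x_adv = (x + delta).clamp(0, 1)
    loss = -mse(autoencoder(x_adv), x_adv)   # ascend the reconstruction error
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)              # stay within the budget

x_adv = (x + delta).clamp(0, 1)
print("reconstruction error, clean vs adversarial:",
      mse(autoencoder(x), x).item(),
      mse(autoencoder(x_adv), x_adv).item())
```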
no code implementations • 23 Feb 2021 • Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen
In this paper, we formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
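A minimal sketch of evaluating a model under joint perturbations follows (illustrative only; it does not implement the paper's non-singular robustness notion): apply a one-step FGSM-style input perturbation and a small random weight perturbation at the same time, and compare the resulting loss to the clean loss.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 5))
criterion = nn.CrossEntropyLoss()

x = torch.randn(64, 20)
y = torch.randint(0, 5, (64,))

def joint_perturbation_loss(model, eps_x=0.1, eps_w=0.02):
    """Loss under a one-step input perturbation plus random weight noise."""
    # FGSM-style input perturbation.
    x_adv = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(criterion(model(x_adv), y), x_adv)[0]
    x_adv = (x + eps_x * grad.sign()).detach()

    # Random weight perturbation on a copy of the model.
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(eps_w * torch.randn_like(p))
    return criterion(noisy(x_adv), y).item()

print("clean loss:", criterion(model(x), y).item())
print("joint-perturbation loss:", joint_perturbation_loss(model))
```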
no code implementations • 21 Dec 2019 • Chia-Mu Yu, Ching-Tang Chang, Yen-Wu Ti
Deepfakes can erode public trust in digital images and videos, with far-reaching effects on political and social stability.
no code implementations • 27 May 2019 • Kazuto Fukuchi, Chia-Mu Yu, Arashi Haishima, Jun Sakuma
Instead of considering the worst case, we aim to construct a private mechanism whose error rate is adaptive to the easiness of estimation of the minimum.
no code implementations • 24 Sep 2018 • Chia-Yi Hsu, Pei-Hsuan Lu, Pin-Yu Chen, Chia-Mu Yu
Recent studies have found that deep learning systems are vulnerable to adversarial examples; e.g., visually unrecognizable adversarial images can easily be crafted to cause misclassification.
1 code implementation • 14 Apr 2018 • Pei-Hsuan Lu, Pin-Yu Chen, Kang-Cheng Chen, Chia-Mu Yu
In recent years, defending against adversarial perturbations of natural examples in order to build robust machine learning models trained with deep neural networks (DNNs) has become an emerging research field at the intersection of deep learning and security.
1 code implementation • 26 Mar 2018 • Pei-Hsuan Lu, Pin-Yu Chen, Chia-Mu Yu
Understanding and characterizing the subspaces of adversarial examples aid in studying the robustness of deep neural networks (DNNs) to adversarial perturbations.