Search Results for author: Chia-Yi Hsu

Found 10 papers, 4 papers with code

Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models?

1 code implementation • 16 Oct 2023 • Yu-Lin Tsai, Chia-Yi Hsu, Chulin Xie, Chih-Hsun Lin, Jia-You Chen, Bo Li, Pin-Yu Chen, Chia-Mu Yu, Chun-Ying Huang

While efforts have been made to mitigate such problems, either by implementing a safety filter at the evaluation stage or by fine-tuning models to eliminate undesirable concepts or styles, the effectiveness of these safety measures in dealing with a wide range of prompts remains largely unexplored.
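The paper stress-tests such safety measures with adversarially crafted prompts. For reference, here is a minimal sketch of the kind of evaluation-stage filter being probed, using a purely hypothetical keyword blocklist (deployed filters use learned classifiers or embedding similarity, not keyword matching):

```python
# Toy illustration of an evaluation-stage safety filter: reject a prompt if
# it mentions a blocked concept. The blocklist and function name here are
# hypothetical, not from the paper.

BLOCKED_CONCEPTS = {"violence", "nudity"}  # hypothetical blocklist

def passes_safety_filter(prompt: str) -> bool:
    """Return True if the prompt mentions no blocked concept."""
    tokens = set(prompt.lower().split())
    return tokens.isdisjoint(BLOCKED_CONCEPTS)

# Ring-A-Bell-style evaluations ask whether reworded or adversarially
# optimized prompts evade exactly this kind of check.
print(passes_safety_filter("a painting of a quiet village"))  # True
print(passes_safety_filter("a scene of graphic violence"))    # False
```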

Formalizing Generalization and Adversarial Robustness of Neural Networks to Weight Perturbations

no code implementations • NeurIPS 2021 • Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen

Studying the sensitivity of neural networks to weight perturbations and their impact on model performance, including generalization and robustness, is an active research topic due to its implications for a wide range of machine learning tasks such as model compression, generalization gap assessment, and adversarial attacks.

Adversarial Robustness • Model Compression
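A minimal sketch of the kind of sensitivity probe the abstract describes, assuming a toy PyTorch model: add Gaussian noise of increasing scale to the weights and record the change in loss. Random data stands in for a real task; this is not the paper's formal analysis.

```python
import copy
import torch
import torch.nn as nn

# Toy model and data (assumptions, not the paper's setup).
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
loss_fn = nn.CrossEntropyLoss()
base_loss = loss_fn(model(x), y).item()

for sigma in (0.01, 0.05, 0.1):
    perturbed = copy.deepcopy(model)
    with torch.no_grad():
        for p in perturbed.parameters():
            # Gaussian weight perturbation of scale sigma.
            p.add_(sigma * torch.randn_like(p))
    delta = loss_fn(perturbed(x), y).item() - base_loss
    print(f"sigma={sigma}: loss change {delta:+.4f}")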

CAFE: Catastrophic Data Leakage in Vertical Federated Learning

1 code implementation • 26 Oct 2021 • Xiao Jin, Pin-Yu Chen, Chia-Yi Hsu, Chia-Mu Yu, Tianyi Chen

We name our proposed method catastrophic data leakage in vertical federated learning (CAFE).

Vertical Federated Learning
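The snippet only names the method, but the underlying attack family recovers private training data by matching gradients. A heavily simplified sketch of that generic gradient-inversion idea, assuming the label is known; CAFE's actual large-batch vertical-FL algorithm differs.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
loss_fn = nn.CrossEntropyLoss()

# Gradient "observed" from a private example (simulated here).
x_true, y = torch.randn(1, 10), torch.tensor([1])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y),
                                 model.parameters())

# Optimize a dummy input until its gradient matches the observed one.
x_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    grads = torch.autograd.grad(loss_fn(model(x_dummy), y),
                                model.parameters(), create_graph=True)
    gap = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
    gap.backward()
    opt.step()

print("reconstruction error:", (x_dummy - x_true).norm().item())
```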

Real-World Adversarial Examples involving Makeup Application

no code implementations • 4 Sep 2021 • Chang-Sheng Lin, Chia-Yi Hsu, Pin-Yu Chen, Chia-Mu Yu

A Cycle-GAN is used to generate the adversarial makeup, and the victimized classifier is a VGG-16.

Adversarial Attack • Face Recognition +1
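A hedged sketch of the attack objective only: a PGD-style perturbation that raises a pretrained VGG-16's loss on a stand-in image. The Cycle-GAN that shapes the perturbation into realistic makeup is omitted, and this ImageNet VGG-16 merely stands in for the face-recognition victim.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
x = torch.rand(1, 3, 224, 224)   # stand-in for a face image
label = torch.tensor([0])        # stand-in identity label
eps, alpha, steps = 8 / 255, 2 / 255, 10

delta = torch.zeros_like(x, requires_grad=True)
for _ in range(steps):           # gradient ascent on the victim's loss
    loss = F.cross_entropy(vgg((x + delta).clamp(0, 1)), label)
    loss.backward()
    with torch.no_grad():
        delta += alpha * delta.grad.sign()
        delta.clamp_(-eps, eps)  # keep perturbation within the L-inf budget
        delta.grad.zero_()

with torch.no_grad():
    print("victim prediction:",
          vgg((x + delta).clamp(0, 1)).argmax(1).item())
```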

Formalizing Generalization and Robustness of Neural Networks to Weight Perturbations

no code implementations • 3 Mar 2021 • Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen

Studying the sensitivity of neural networks to weight perturbations and their impact on model performance, including generalization and robustness, is an active research topic due to its implications for a wide range of machine learning tasks such as model compression, generalization gap assessment, and adversarial attacks.

Model Compression

Adversarial Examples can be Effective Data Augmentation for Unsupervised Machine Learning

1 code implementation • 2 Mar 2021 • Chia-Yi Hsu, Pin-Yu Chen, Songtao Lu, Sijia Liu, Chia-Mu Yu

In this paper, we propose a framework of generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation.

BIG-bench Machine Learning • Contrastive Learning +2
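A minimal sketch of the core idea, assuming a toy autoencoder: perturb an input to maximize the model's own unsupervised (reconstruction) loss, then fold the perturbed batch back in as augmentation. The paper's framework also covers other unsupervised losses such as contrastive learning; this is an illustration, not their exact formulation.

```python
import torch
import torch.nn as nn

# Toy autoencoder and data (assumptions, not the paper's setup).
ae = nn.Sequential(nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 20))
x = torch.randn(16, 20)
eps = 0.1

x_req = x.clone().requires_grad_(True)
recon_loss = nn.functional.mse_loss(ae(x_req), x_req.detach())
recon_loss.backward()

# One FGSM-style step in the direction that worsens reconstruction.
x_adv = (x + eps * x_req.grad.sign()).detach()
augmented = torch.cat([x, x_adv])  # perturbed samples reused as extra data
print("augmented batch:", augmented.shape)
```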

Non-Singular Adversarial Robustness of Neural Networks

no code implementations • 23 Feb 2021 • Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen

In this paper, we formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.

Adversarial Robustness
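A toy sketch of what joint perturbation to data inputs and model weights looks like in practice: one FGSM-style input step combined with Gaussian noise on the weights, compared against the clean loss. The paper's formal non-singular robustness notion and bounds are not reproduced here.

```python
import copy
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
loss_fn = nn.CrossEntropyLoss()
eps_x, eps_w = 0.1, 0.05

# Input perturbation: single FGSM step.
x_req = x.clone().requires_grad_(True)
loss_fn(model(x_req), y).backward()
x_pert = x + eps_x * x_req.grad.sign()

# Weight perturbation: Gaussian noise on a copy of the model.
noisy = copy.deepcopy(model)
with torch.no_grad():
    for p in noisy.parameters():
        p.add_(eps_w * torch.randn_like(p))

print("clean loss:", loss_fn(model(x), y).item())
print("joint loss:", loss_fn(noisy(x_pert), y).item())
```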

On The Utility of Conditional Generation Based Mutual Information for Characterizing Adversarial Subspaces

no code implementations • 24 Sep 2018 • Chia-Yi Hsu, Pei-Hsuan Lu, Pin-Yu Chen, Chia-Mu Yu

Recent studies have found that deep learning systems are vulnerable to adversarial examples; e.g., visually unrecognizable adversarial images can easily be crafted to result in misclassification.
