Search Results for author: Yu-Lin Tsai

Found 9 papers, 2 papers with code

Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models

no code implementations27 May 2024 Chia-Yi Hsu, Yu-Lin Tsai, Chih-Hsun Lin, Pin-Yu Chen, Chia-Mu Yu, Chun-Ying Huang

Therefore, parameter-efficient fine-tuning methods such as LoRA have emerged, allowing users to fine-tune LLMs without considerable computing resources and with little performance degradation compared to fine-tuning all parameters.
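The low-rank adaptation idea behind LoRA can be sketched as follows; this is a minimal NumPy illustration under assumed shapes and names, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 8, 8, 2            # layer dims and LoRA rank (illustrative; r << d)
W = rng.normal(size=(d_out, d_in))  # frozen pretrained weight, never updated

# Only the low-rank factors A and B are trained; B starts at zero so the
# adaptation contributes nothing before training.
A = rng.normal(scale=0.01, size=(r, d_in))
B = np.zeros((d_out, r))

def lora_forward(x, alpha=1.0):
    """Forward pass: frozen weight plus the low-rank update B @ A."""
    return W @ x + alpha * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B = 0, the adapted layer matches the frozen layer exactly.
assert np.allclose(lora_forward(x), W @ x)
```

Here only `r * (d_in + d_out)` parameters are trainable instead of `d_in * d_out`, which is where the compute savings come from.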

Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models?

1 code implementation16 Oct 2023 Yu-Lin Tsai, Chia-Yi Hsu, Chulin Xie, Chih-Hsun Lin, Jia-You Chen, Bo Li, Pin-Yu Chen, Chia-Mu Yu, Chun-Ying Huang

While efforts have been made to mitigate such problems, either by implementing a safety filter at the evaluation stage or by fine-tuning models to eliminate undesirable concepts or styles, the effectiveness of these safety measures in dealing with a wide range of prompts remains largely unexplored.

Exploring the Benefits of Visual Prompting in Differential Privacy

1 code implementation ICCV 2023 Yizhe Li, Yu-Lin Tsai, Xuebin Ren, Chia-Mu Yu, Pin-Yu Chen

Visual Prompting (VP) is an emerging and powerful technique that enables sample-efficient adaptation to downstream tasks by learning an input-space prompt for a well-trained, frozen source model.
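A common VP setup learns a padding-style prompt around the image while the source model stays frozen; the sketch below shows that input-space transformation only, with pad width and image size as illustrative assumptions rather than the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

H = W = 24           # downstream image size (assumed)
pad = 4              # prompt occupies a learnable border around the image
HP = H + 2 * pad     # padded size fed to the frozen source model

prompt = np.zeros((HP, HP))          # trainable visual prompt (zero-initialized)
mask = np.ones((HP, HP))
mask[pad:pad + H, pad:pad + W] = 0   # restrict the prompt to the border region

def apply_prompt(image):
    """Center the image on a padded canvas and add the border prompt."""
    canvas = np.zeros((HP, HP))
    canvas[pad:pad + H, pad:pad + W] = image
    return canvas + mask * prompt

img = rng.normal(size=(H, W))
out = apply_prompt(img)
# The image region is untouched; only the border carries the prompt.
assert np.allclose(out[pad:pad + H, pad:pad + W], img)
```

Training then updates only `prompt` (by backpropagating through the frozen model), which keeps the number of adapted parameters small.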

Image Classification · Transfer Learning +1

Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise

no code implementations2 Nov 2022 Jhih-Cing Huang, Yu-Lin Tsai, Chao-Han Huck Yang, Cheng-Fang Su, Chia-Mu Yu, Pin-Yu Chen, Sy-Yen Kuo

Recently, quantum classifiers have been found to be vulnerable to adversarial attacks, in which quantum classifiers are deceived by imperceptible noises, leading to misclassification.

Formalizing Generalization and Adversarial Robustness of Neural Networks to Weight Perturbations

no code implementations NeurIPS 2021 Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen

Studying the sensitivity of neural networks to weight perturbations and its impact on model performance, including generalization and robustness, is an active research topic due to its implications for a wide range of machine learning tasks such as model compression, generalization gap assessment, and adversarial attacks.

Adversarial Robustness · Model Compression

Formalizing Generalization and Robustness of Neural Networks to Weight Perturbations

no code implementations3 Mar 2021 Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen

Studying the sensitivity of neural networks to weight perturbations and its impact on model performance, including generalization and robustness, is an active research topic due to its implications for a wide range of machine learning tasks such as model compression, generalization gap assessment, and adversarial attacks.

Model Compression

Non-Singular Adversarial Robustness of Neural Networks

no code implementations23 Feb 2021 Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen

In this paper, we formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
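A joint-perturbation robustness condition of this kind can be sketched in the following form; the symbols and norm budgets here are illustrative, not the paper's exact definition:

```latex
% Sketch (assumed notation): a classifier f_w is non-singularly robust at a
% labeled point (x, y) within budgets \epsilon_x (input) and \epsilon_w
% (weights) if no admissible joint perturbation changes the prediction:
\forall\, \delta_x, \delta_w \ \text{with}\
\|\delta_x\| \le \epsilon_x,\ \|\delta_w\| \le \epsilon_w:
\quad \arg\max_k \big[ f_{w + \delta_w}(x + \delta_x) \big]_k = y .
```

Setting \(\epsilon_w = 0\) recovers the standard input-only adversarial robustness condition, which is why the joint notion is termed "non-singular."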

Adversarial Robustness
