Search Results for author: Chia-Mu Yu

Found 22 papers, 9 papers with code

DiffuseKronA: A Parameter Efficient Fine-tuning Method for Personalized Diffusion Models

no code implementations • 27 Feb 2024 • Shyam Marjit, Harshit Singh, Nityanand Mathur, Sayak Paul, Chia-Mu Yu, Pin-Yu Chen

In the realm of subject-driven text-to-image (T2I) generative models, recent developments like DreamBooth and BLIP-Diffusion have led to impressive results yet encounter limitations due to their intensive fine-tuning demands and substantial parameter requirements.

Image Generation
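
DiffuseKronA's core idea is a Kronecker-product adapter: the frozen weight is augmented with an update built from two small factors, which needs far fewer trainable parameters than full fine-tuning. A minimal PyTorch sketch of such an adapter follows; the factor shapes, initialization, and scaling are illustrative assumptions, not the paper's exact recipe.

import torch
import torch.nn as nn

class KroneckerAdapter(nn.Module):
    """Wraps a frozen nn.Linear with a trainable update W0 + s * kron(A, B)."""
    def __init__(self, base: nn.Linear, a_out=16, a_in=16, scale=1.0):
        super().__init__()
        out_f, in_f = base.out_features, base.in_features
        assert out_f % a_out == 0 and in_f % a_in == 0
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # pretrained weight stays frozen
        self.A = nn.Parameter(torch.zeros(a_out, a_in))  # zero init => adapter starts as a no-op
        self.B = nn.Parameter(0.01 * torch.randn(out_f // a_out, in_f // a_in))
        self.scale = scale

    def forward(self, x):
        delta = torch.kron(self.A, self.B)               # (out_f, in_f) update from tiny factors
        return self.base(x) + self.scale * nn.functional.linear(x, delta)

layer = KroneckerAdapter(nn.Linear(768, 768))
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 2,560 vs 589,824 full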

Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models?

1 code implementation • 16 Oct 2023 • Yu-Lin Tsai, Chia-Yi Hsu, Chulin Xie, Chih-Hsun Lin, Jia-You Chen, Bo Li, Pin-Yu Chen, Chia-Mu Yu, Chun-Ying Huang

While efforts have been made to mitigate such problems, either by implementing a safety filter at the evaluation stage or by fine-tuning models to eliminate undesirable concepts or styles, the effectiveness of these safety measures in dealing with a wide range of prompts remains largely unexplored.
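
At its core, Ring-A-Bell probes such safety measures by first extracting an empirical representation of the removed concept from a text encoder. A toy sketch of that extraction step is given below; the encode placeholder stands in for a real CLIP-style text encoder, and the prompt pairs and injection strength are assumptions for illustration only.

import numpy as np

def encode(prompt: str) -> np.ndarray:
    # placeholder only: a real implementation would call a CLIP-style text encoder
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.standard_normal(512)

# prompt pairs that differ only in the target concept
pairs = [("a violent scene of a protest", "a peaceful scene of a protest"),
         ("violent artwork", "peaceful artwork")]
concept_vec = np.mean([encode(w) - encode(wo) for w, wo in pairs], axis=0)

# inject the concept into a benign prompt embedding; the full attack would then
# search for discrete tokens whose embedding lands close to `target`
eta = 3.0  # injection strength (illustrative)
target = encode("two people arguing") + eta * concept_vec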

Exploring the Benefits of Differentially Private Pre-training and Parameter-Efficient Fine-tuning for Table Transformers

1 code implementation • 12 Sep 2023 • Xilong Wang, Chia-Mu Yu, Pin-Yu Chen

For machine learning with tabular data, Table Transformer (TabTransformer) is a state-of-the-art neural network model, while Differential Privacy (DP) is an essential component to ensure data privacy.

Transfer Learning
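
The privacy side of this setup is standard DP-SGD, which clips and noises per-sample gradients during training. A hedged sketch using the Opacus library is shown below; the two-layer head stands in for a TabTransformer, and all hyperparameters are illustrative rather than the paper's settings.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

X, y = torch.randn(1024, 32), torch.randint(0, 2, (1024,))   # stand-in tabular data
loader = DataLoader(TensorDataset(X, y), batch_size=64)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

model, optimizer, loader = PrivacyEngine().make_private(
    module=model, optimizer=optimizer, data_loader=loader,
    noise_multiplier=1.0,   # Gaussian noise scale added to summed clipped grads
    max_grad_norm=1.0,      # per-sample gradient clipping bound
)

loss_fn = nn.CrossEntropyLoss()
for xb, yb in loader:
    optimizer.zero_grad()
    loss_fn(model(xb), yb).backward()   # Opacus clips and noises per-sample gradients
    optimizer.step()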

Exploring the Benefits of Visual Prompting in Differential Privacy

1 code implementation • ICCV 2023 • Yizhe Li, Yu-Lin Tsai, Xuebin Ren, Chia-Mu Yu, Pin-Yu Chen

Visual Prompting (VP) is an emerging and powerful technique that allows sample-efficient adaptation to downstream tasks by engineering a well-trained frozen source model.

Image Classification Transfer Learning +1
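
Mechanically, visual prompting trains only a small input-space perturbation while the source model stays frozen. The sketch below implements one common variant, a trainable border pad; the image size, pad width, and zero initialization are assumptions of this sketch.

import torch
import torch.nn as nn

class VisualPrompt(nn.Module):
    """Adds a trainable border prompt to inputs of a frozen source model."""
    def __init__(self, frozen_model: nn.Module, image_size=224, pad=16):
        super().__init__()
        self.model = frozen_model.eval()
        for p in self.model.parameters():
            p.requires_grad = False         # the source model is never updated
        mask = torch.zeros(1, 3, image_size, image_size)
        mask[..., :pad, :] = 1
        mask[..., -pad:, :] = 1
        mask[..., :, :pad] = 1
        mask[..., :, -pad:] = 1
        self.register_buffer("mask", mask)  # only the border region is trainable
        self.delta = nn.Parameter(torch.zeros_like(mask))

    def forward(self, x):                   # x: (B, 3, H, W)
        return self.model(x + self.delta * self.mask)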

Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise

no code implementations • 2 Nov 2022 • Jhih-Cing Huang, Yu-Lin Tsai, Chao-Han Huck Yang, Cheng-Fang Su, Chia-Mu Yu, Pin-Yu Chen, Sy-Yen Kuo

Recently, quantum classifiers have been found to be vulnerable to adversarial attacks, in which quantum classifiers are deceived by imperceptible noises, leading to misclassification.
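
The defense here relates quantum (e.g. depolarizing) noise to certified robustness in the spirit of randomized smoothing. As an illustrative classical analogue only, the Cohen et al. smoothing certificate is sketched below; the probability and noise scale are made-up inputs, and the quantum bound in the paper differs in its details.

import numpy as np
from scipy.stats import norm

def certified_radius(p_a: float, sigma: float) -> float:
    # If the noise-smoothed classifier returns its top class with probability
    # p_a > 1/2, the prediction is certified within an L2 ball of radius
    # sigma * Phi^{-1}(p_a) (Cohen et al., classical randomized smoothing).
    return sigma * norm.ppf(p_a) if p_a > 0.5 else 0.0

print(certified_radius(p_a=0.9, sigma=0.25))   # ~0.32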

DPGEN: Differentially Private Generative Energy-Guided Network for Natural Image Synthesis

no code implementations • CVPR 2022 • Jia-Wei Chen, Chia-Mu Yu, Ching-Chia Kao, Tzai-Wei Pang, Chun-Shien Lu

Despite an increased demand for valuable data, the privacy concerns associated with sensitive datasets present a barrier to data sharing.

Image Generation

Formalizing Generalization and Adversarial Robustness of Neural Networks to Weight Perturbations

no code implementations • NeurIPS 2021 • Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen

Studying the sensitivity of weight perturbation in neural networks and its impacts on model performance, including generalization and robustness, is an active research topic due to its implications on a wide range of machine learning tasks such as model compression, generalization gap assessment, and adversarial attacks.

Adversarial Robustness Model Compression
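
A quick way to see what sensitivity to weight perturbation measures is to compare the loss before and after noising all weights, as in the toy sketch below; the noise scale and model are illustrative.

import copy
import torch
import torch.nn as nn

def loss_under_weight_noise(model, x, y, sigma=0.01):
    noisy = copy.deepcopy(model)            # perturb a copy, keep the original intact
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(sigma * torch.randn_like(p))
    return nn.functional.cross_entropy(noisy(x), y)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x, y = torch.randn(128, 20), torch.randint(0, 2, (128,))
clean = nn.functional.cross_entropy(model(x), y)
print(clean.item(), loss_under_weight_noise(model, x, y).item())  # gap ~ sensitivity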

Meta Adversarial Perturbations

no code implementations • AAAI Workshop AdvML 2022 • Chia-Hung Yuan, Pin-Yu Chen, Chia-Mu Yu

A plethora of attack methods have been proposed to generate adversarial examples, among which iterative methods have demonstrated the ability to find strong attacks.
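
The meta-perturbation idea is MAML-flavored: learn one initialization that a single gradient step adapts into a strong per-image attack. A hedged sketch follows; the CIFAR-sized shape, budgets, and step sizes are assumptions, and because of the sign() the meta-gradient is effectively first-order.

import torch
import torch.nn as nn

def meta_perturbation(model, loader, eps=8/255, alpha=2/255, meta_lr=0.01):
    """Learn an init `delta` such that ONE inner step yields a strong attack."""
    loss_fn = nn.CrossEntropyLoss()
    delta = torch.zeros(1, 3, 32, 32, requires_grad=True)   # CIFAR-sized, illustrative
    for x, y in loader:
        # inner step: one-step adaptation of the shared perturbation to this batch
        g = torch.autograd.grad(loss_fn(model((x + delta).clamp(0, 1)), y),
                                delta, create_graph=True)[0]
        adapted = (delta + alpha * g.sign()).clamp(-eps, eps)
        # outer step: update the initialization so the ADAPTED attack is stronger
        meta_g = torch.autograd.grad(loss_fn(model((x + adapted).clamp(0, 1)), y), delta)[0]
        with torch.no_grad():
            delta += meta_lr * meta_g.sign()
            delta.clamp_(-eps, eps)
    return delta.detach()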

CAFE: Catastrophic Data Leakage in Vertical Federated Learning

1 code implementation • 26 Oct 2021 • Xiao Jin, Pin-Yu Chen, Chia-Yi Hsu, Chia-Mu Yu, Tianyi Chen

We name our proposed method catastrophic data leakage in vertical federated learning (CAFE).

Vertical Federated Learning
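
CAFE belongs to the gradient-inversion family: the attacker optimizes dummy inputs until their gradients match the gradients observed during training, and the paper's contribution is making this work catastrophically at large batch sizes in vertical FL. The single-sample DLG-style toy below shows the matching loop; the linear model is illustrative and the label is assumed known for simplicity.

import torch
import torch.nn as nn

model = nn.Linear(10, 2)
loss_fn = nn.CrossEntropyLoss()

# victim's private sample and the gradients the attacker observes
x_true, y_true = torch.randn(1, 10), torch.tensor([1])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true), model.parameters())

# optimize dummy data so its gradients match the observed ones
x_hat = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.Adam([x_hat], lr=0.1)
for _ in range(300):
    opt.zero_grad()
    grads = torch.autograd.grad(loss_fn(model(x_hat), y_true),
                                model.parameters(), create_graph=True)
    sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads)).backward()
    opt.step()
print((x_hat - x_true).norm().item())   # small norm => the private data leaked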

Real-World Adversarial Examples involving Makeup Application

no code implementations • 4 Sep 2021 • Chang-Sheng Lin, Chia-Yi Hsu, Pin-Yu Chen, Chia-Mu Yu

A Cycle-GAN is used to generate the adversarial makeup, and the victim classifier's architecture is VGG-16.

Adversarial Attack Face Recognition +1

Formalizing Generalization and Robustness of Neural Networks to Weight Perturbations

no code implementations • 3 Mar 2021 • Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen

Studying the sensitivity of weight perturbation in neural networks and its impacts on model performance, including generalization and robustness, is an active research topic due to its implications on a wide range of machine learning tasks such as model compression, generalization gap assessment, and adversarial attacks.

Model Compression

Adversarial Examples can be Effective Data Augmentation for Unsupervised Machine Learning

1 code implementation • 2 Mar 2021 • Chia-Yi Hsu, Pin-Yu Chen, Songtao Lu, Sijia Liu, Chia-Mu Yu

In this paper, we propose a framework of generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation.

BIG-bench Machine Learning Contrastive Learning +2
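
Without labels, an adversarial example can be defined directly in embedding space: perturb the input to push its representation away from the clean one. The PGD-style sketch below follows that recipe; the encoder, cosine objective, and budgets are illustrative assumptions rather than the paper's exact formulation.

import torch
import torch.nn as nn

def unsupervised_attack(encoder, x, eps=0.03, alpha=0.01, steps=10):
    with torch.no_grad():
        z_clean = encoder(x)                         # anchor embedding
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        z_adv = encoder(x + delta)
        # maximize embedding drift = minimize cosine similarity to the anchor
        loss = 1 - nn.functional.cosine_similarity(z_adv, z_clean).mean()
        g = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += alpha * g.sign()
            delta.clamp_(-eps, eps)
    return (x + delta).detach()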

Non-Singular Adversarial Robustness of Neural Networks

no code implementations • 23 Feb 2021 • Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen

In this paper, we formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.

Adversarial Robustness
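
The non-singular view evaluates both perturbations at once: the input and the weights are attacked jointly, each within its own budget. A one-step toy sketch is below (budgets illustrative; it mutates the model's weights in place, so a real evaluation would work on a copy).

import torch
import torch.nn as nn

def joint_one_step(model, x, y, eps_x=0.03, eps_w=0.01):
    loss_fn = nn.CrossEntropyLoss()
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    gx = torch.autograd.grad(loss, x, retain_graph=True)[0]
    gw = torch.autograd.grad(loss, list(model.parameters()))
    with torch.no_grad():
        x_adv = x + eps_x * gx.sign()       # data-input perturbation
        for p, g in zip(model.parameters(), gw):
            p.add_(eps_w * g.sign())        # simultaneous weight perturbation
    return x_adv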

Detecting Deepfake-Forged Contents with Separable Convolutional Neural Network and Image Segmentation

no code implementations • 21 Dec 2019 • Chia-Mu Yu, Ching-Tang Chang, Yen-Wu Ti

Deepfake can result in an erosion of public trust in digital images and videos, which has far-reaching effects on political and social stability.

Face Swapping Image Segmentation +1

Locally Differentially Private Minimum Finding

no code implementations • 27 May 2019 • Kazuto Fukuchi, Chia-Mu Yu, Arashi Haishima, Jun Sakuma

Instead of considering the worst case, we aim to construct a private mechanism whose error rate is adaptive to the easiness of estimation of the minimum.
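
The paper's adaptive-error mechanism is more refined than what fits here, but the basic LDP minimum-finding setup can be illustrated with a toy baseline: binary search over thresholds where every user answers "is your value below t?" through randomized response. All budgets, round counts, and the detection threshold below are made-up illustration values, not the paper's construction.

import numpy as np

def randomized_response(bit, eps, rng):
    p = np.exp(eps) / (np.exp(eps) + 1)    # probability of answering truthfully
    return bit if rng.random() < p else not bit

def ldp_min(values, eps=2.0, lo=0.0, hi=1.0, rounds=5, seed=0):
    rng = np.random.default_rng(seed)
    eps_r = eps / rounds                   # naive composition across rounds
    p = np.exp(eps_r) / (np.exp(eps_r) + 1)
    for _ in range(rounds):
        mid = (lo + hi) / 2
        votes = [randomized_response(v < mid, eps_r, rng) for v in values]
        est = (np.mean(votes) - (1 - p)) / (2 * p - 1)   # debiased fraction below mid
        if est > 0.05:                     # conservative detection threshold (illustrative)
            hi = mid                       # some user is probably below mid
        else:
            lo = mid
    return (lo + hi) / 2

vals = np.random.default_rng(1).uniform(0.3, 1.0, 10000)
print(ldp_min(vals))                       # noisy estimate near the true minimum 0.3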

On The Utility of Conditional Generation Based Mutual Information for Characterizing Adversarial Subspaces

no code implementations • 24 Sep 2018 • Chia-Yi Hsu, Pei-Hsuan Lu, Pin-Yu Chen, Chia-Mu Yu

Recent studies have found that deep learning systems are vulnerable to adversarial examples; e.g., visually unrecognizable adversarial images can easily be crafted to result in misclassification.

On the Limitation of MagNet Defense against $L_1$-based Adversarial Examples

1 code implementation • 14 Apr 2018 • Pei-Hsuan Lu, Pin-Yu Chen, Kang-Cheng Chen, Chia-Mu Yu

In recent years, defending against adversarial perturbations to natural examples in order to build robust machine learning models trained by deep neural networks (DNNs) has become an emerging research field at the intersection of deep learning and security.

On the Limitation of Local Intrinsic Dimensionality for Characterizing the Subspaces of Adversarial Examples

1 code implementation • 26 Mar 2018 • Pei-Hsuan Lu, Pin-Yu Chen, Chia-Mu Yu

Understanding and characterizing the subspaces of adversarial examples aid in studying the robustness of deep neural networks (DNNs) to adversarial perturbations.
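
For reference, the LID score discussed here is usually computed with the maximum-likelihood estimator from k-nearest-neighbor distances; a self-contained sketch is below (k and the Gaussian test batch are illustrative).

import numpy as np

def lid_mle(x, batch, k=20):
    # LID_hat = -( (1/k) * sum_i log(r_i / r_k) )^(-1), with r_1..r_k the
    # distances from x to its k nearest neighbors in the batch
    d = np.sort(np.linalg.norm(batch - x, axis=1))[:k]
    d = d[d > 0]                           # drop x itself if it is in the batch
    return -1.0 / np.mean(np.log(d / d[-1]))

rng = np.random.default_rng(0)
batch = rng.standard_normal((1000, 32))
print(lid_mle(batch[0], batch))            # roughly tracks the intrinsic dimension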
