Search Results for author: Yihua Zhang

Found 28 papers, 21 papers with code

Towards LLM Unlearning Resilient to Relearning Attacks: A Sharpness-Aware Minimization Perspective and Beyond

no code implementations • 7 Feb 2025 • Chongyu Fan, Jinghan Jia, Yihua Zhang, Anil Ramakrishna, Mingyi Hong, Sijia Liu

For the first time, we establish a connection between robust unlearning and sharpness-aware minimization (SAM) through a unified robust optimization framework, in an analogy to adversarial training designed to defend against adversarial attacks.
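
For readers unfamiliar with sharpness-aware minimization, the sketch below shows the generic SAM perturb-then-descend step that the paper connects to robust unlearning; `loss_fn` stands in for an unlearning objective, and the snippet is an illustrative sketch rather than the authors' implementation.

```python
import torch

def sam_step(model, loss_fn, batch, optimizer, rho=0.05):
    """One generic SAM step: ascend to a nearby worst-case point in weight
    space, then apply the optimizer update using gradients from that point."""
    # 1) Gradients at the current weights.
    loss_fn(model, batch).backward()
    # 2) Epsilon-ascent: perturb weights along the normalized gradient direction.
    with torch.no_grad():
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        grad_norm = torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12
        eps = []
        for p in model.parameters():
            e = rho * p.grad / grad_norm if p.grad is not None else None
            if e is not None:
                p.add_(e)
            eps.append(e)
    optimizer.zero_grad()
    # 3) Gradients at the perturbed (worst-case) weights.
    loss_fn(model, batch).backward()
    # 4) Undo the perturbation and step with the sharpness-aware gradients.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
```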

Forget Vectors at Play: Universal Input Perturbations Driving Machine Unlearning in Image Classification

1 code implementation • 21 Dec 2024 • Changchang Sun, Ren Wang, Yihua Zhang, Jinghan Jia, Jiancheng Liu, Gaowen Liu, Sijia Liu, Yan Yan

Machine unlearning (MU), which seeks to erase the influence of specific unwanted data from already-trained models, is becoming increasingly vital in model editing, particularly to comply with evolving data regulations like the "right to be forgotten".

Image Classification Machine Unlearning +1
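
The title's "forget vector" can be read as a single universal input-space perturbation optimized to induce forgetting while the model weights stay frozen. A minimal sketch under that reading (the loss surrogate, names, and hyperparameters are assumptions, not the paper's exact recipe):

```python
import torch

def learn_forget_vector(model, forget_loader, epochs=10, lr=0.1, image_shape=(3, 32, 32)):
    """Sketch: optimize one input-space perturbation ("forget vector") that,
    when added to forget-set images, suppresses the model's original
    predictions. The model is frozen; only the perturbation is trained."""
    for p in model.parameters():
        p.requires_grad_(False)
    model.eval()
    delta = torch.zeros(1, *image_shape, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for x, y in forget_loader:
            logits = model(torch.clamp(x + delta, 0.0, 1.0))
            # Push predictions away from the memorized labels
            # (one possible unlearning surrogate).
            loss = -torch.nn.functional.cross_entropy(logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return delta.detach()
```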

UOE: Unlearning One Expert Is Enough For Mixture-of-Experts LLMs

no code implementations • 27 Nov 2024 • Haomin Zhuang, Yihua Zhang, Kehan Guo, Jinghan Jia, Gaowen Liu, Sijia Liu, Xiangliang Zhang

As MoE LLMs are celebrated for their exceptional performance and highly efficient inference processes, we ask: How can unlearning be performed effectively and efficiently on MoE LLMs?

Large Language Model

Edit Away and My Face Will not Stay: Personal Biometric Defense against Malicious Generative Editing

1 code implementation • 25 Nov 2024 • Hanhui Wang, Yihua Zhang, Ruizheng Bai, Yue Zhao, Sijia Liu, Zhengzhong Tu

Recent advancements in diffusion models have made generative image editing more accessible, enabling creative edits but raising ethical concerns, particularly regarding malicious edits to human portraits that threaten privacy and identity security.

Privacy Preserving

WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models

1 code implementation • 23 Oct 2024 • Jinghan Jia, Jiancheng Liu, Yihua Zhang, Parikshit Ram, Nathalie Baracaldo, Sijia Liu

The need for effective unlearning mechanisms in large language models (LLMs) is increasingly urgent, driven by the necessity to adhere to data regulations and foster ethical generative AI practices.

Pruning then Reweighting: Towards Data-Efficient Training of Diffusion Models

1 code implementation • 27 Sep 2024 • Yize Li, Yihua Zhang, Sijia Liu, Xue Lin

Despite the remarkable generation capabilities of Diffusion Models (DMs), conducting training and inference remains computationally expensive.

Image Generation

SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning

1 code implementation • 28 Apr 2024 • Jinghan Jia, Yihua Zhang, Yimeng Zhang, Jiancheng Liu, Bharat Runwal, James Diffenderfer, Bhavya Kailkhura, Sijia Liu

In this work, we shed light on the significance of optimizer selection in LLM unlearning for the first time, establishing a clear connection between second-order optimization and influence unlearning (a classical approach using influence functions to update the model for data influence removal).

Stochastic Optimization
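
A rough sketch of what a second-order, curvature-preconditioned unlearning update looks like, in contrast to a plain SGD/Adam step; the diagonal-Hessian estimate and clipping here are assumptions for illustration, not SOUL's exact algorithm:

```python
import torch

def preconditioned_unlearn_step(params, grads, hess_diag, lr=1e-3, eps=1e-8, clip=1.0):
    """Rescale each gradient coordinate of the unlearning loss by an estimate
    of local curvature, clip the result, and descend."""
    with torch.no_grad():
        for p, g, h in zip(params, grads, hess_diag):
            step = g / (h.abs() + eps)       # curvature-aware preconditioning
            step = step.clamp(-clip, clip)   # guard against tiny-curvature blow-ups
            p.sub_(lr * step)
```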

The Power of Few: Accelerating and Enhancing Data Reweighting with Coreset Selection

no code implementations • 18 Mar 2024 • Mohammad Jafari, Yimeng Zhang, Yihua Zhang, Sijia Liu

As machine learning tasks continue to evolve, the trend has been to gather larger datasets and train increasingly larger models.

Computational Efficiency

UnlearnCanvas: Stylized Image Dataset for Enhanced Machine Unlearning Evaluation in Diffusion Models

1 code implementation • 19 Feb 2024 • Yihua Zhang, Chongyu Fan, Yimeng Zhang, Yuguang Yao, Jinghan Jia, Jiancheng Liu, Gaoyuan Zhang, Gaowen Liu, Ramana Rao Kompella, Xiaoming Liu, Sijia Liu

The technological advancements in diffusion models (DMs) have demonstrated unprecedented capabilities in text-to-image generation and are widely used in diverse applications.

Machine Unlearning Style Transfer +1

Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark

1 code implementation • 18 Feb 2024 • Yihua Zhang, Pingzhi Li, Junyuan Hong, Jiaxiang Li, Yimeng Zhang, Wenqing Zheng, Pin-Yu Chen, Jason D. Lee, Wotao Yin, Mingyi Hong, Zhangyang Wang, Sijia Liu, Tianlong Chen

In the evolving landscape of natural language processing (NLP), fine-tuning pre-trained Large Language Models (LLMs) with first-order (FO) optimizers like SGD and Adam has become standard.

Benchmarking
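
Zeroth-order (ZO) optimization, the subject of this benchmark, replaces backpropagation with loss differences under shared random weight perturbations. A minimal two-forward-pass sketch in the spirit of ZO-SGD/MeZO (hyperparameters and names are illustrative):

```python
import torch

def zo_sgd_step(model, loss_fn, batch, lr=1e-6, mu=1e-3, seed=0):
    """Memory-efficient zeroth-order step: two forward passes with a shared
    random perturbation, no backward() and hence no gradient storage."""
    def perturb(scale):
        torch.manual_seed(seed)                 # regenerate the same noise z
        with torch.no_grad():
            for p in model.parameters():
                p.add_(scale * mu * torch.randn_like(p))

    with torch.no_grad():
        perturb(+1); loss_plus = loss_fn(model, batch).item()    # L(w + mu*z)
        perturb(-2); loss_minus = loss_fn(model, batch).item()   # L(w - mu*z)
        perturb(+1)                                              # restore w
        g = (loss_plus - loss_minus) / (2 * mu)                  # directional estimate
        torch.manual_seed(seed)
        for p in model.parameters():
            p.sub_(lr * g * torch.randn_like(p))                 # w -= lr * g * z
```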

Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective

1 code implementation • 3 Dec 2023 • Can Jin, Tianjin Huang, Yihua Zhang, Mykola Pechenizkiy, Sijia Liu, Shiwei Liu, Tianlong Chen

The rapid development of large-scale deep learning models questions the affordability of hardware platforms, which necessitates pruning to reduce their computational and memory footprints.

Image Classification Visual Prompting

SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation

1 code implementation • 19 Oct 2023 • Chongyu Fan, Jiancheng Liu, Yihua Zhang, Eric Wong, Dennis Wei, Sijia Liu

To address these challenges, we introduce the concept of 'weight saliency' for MU, drawing parallels with input saliency in model explanation.

Image Classification Image Generation +1
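
One way to picture "weight saliency" is to rank weights by the gradient magnitude of a forgetting loss and restrict unlearning updates to the top-ranked ones. A rough sketch under that reading (the thresholding rule is an assumption, not the official SalUn code):

```python
import torch

def weight_saliency_masks(model, forget_loss, threshold):
    """Binary masks marking weights whose forgetting-loss gradient magnitude
    exceeds `threshold`; later unlearning steps update only those weights."""
    model.zero_grad()
    forget_loss.backward()
    masks = {name: (p.grad.abs() >= threshold).float()
             for name, p in model.named_parameters() if p.grad is not None}
    model.zero_grad()
    return masks
```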

To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now

1 code implementation • 18 Oct 2023 • Yimeng Zhang, Jinghan Jia, Xin Chen, Aochuan Chen, Yihua Zhang, Jiancheng Liu, Ke Ding, Sijia Liu

Specifically, we investigated the adversarial robustness of DMs, assessed by adversarial prompts, when eliminating unwanted concepts, styles, and objects.

Adversarial Robustness Benchmarking +1

DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training

1 code implementation • 3 Oct 2023 • Aochuan Chen, Yimeng Zhang, Jinghan Jia, James Diffenderfer, Jiancheng Liu, Konstantinos Parasyris, Yihua Zhang, Zheng Zhang, Bhavya Kailkhura, Sijia Liu

Our extensive experiments show that DeepZero achieves state-of-the-art (SOTA) accuracy on ResNet-20 trained on CIFAR-10, approaching FO training performance for the first time.

Adversarial Defense Computational Efficiency +1

Robust Mixture-of-Expert Training for Convolutional Neural Networks

1 code implementation • ICCV 2023 • Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Huan Zhang, Pin-Yu Chen, Shiyu Chang, Zhangyang Wang, Sijia Liu

Since the lack of robustness has become one of the main hurdles for CNNs, in this paper we ask: How to adversarially robustify a CNN-based MoE model?

Adversarial Robustness

An Introduction to Bi-level Optimization: Foundations and Applications in Signal Processing and Machine Learning

no code implementations • 1 Aug 2023 • Yihua Zhang, Prashant Khanduri, Ioannis Tsaknakis, Yuguang Yao, Mingyi Hong, Sijia Liu

Overall, we hope that this article can serve to accelerate the adoption of BLO as a generic tool to model, analyze, and innovate on a wide array of emerging SP and ML applications.
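
For orientation, the generic bi-level optimization (BLO) problem surveyed in the article takes the standard textbook form:

```latex
\min_{x \in \mathcal{X}} \; f\big(x,\, y^{*}(x)\big)
\quad \text{subject to} \quad
y^{*}(x) \in \operatorname*{arg\,min}_{y \in \mathcal{Y}} \; g(x, y)
```

where f and g are the upper- and lower-level objectives and x, y the corresponding decision variables.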

A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion

1 code implementation • 29 Mar 2023 • Haomin Zhuang, Yihua Zhang, Sijia Liu

In this work, we study the problem of adversarial attack generation for Stable Diffusion and ask if an adversarial text prompt can be obtained even in the absence of end-to-end model queries.

Adversarial Robustness Adversarial Text

Robustness-preserving Lifelong Learning via Dataset Condensation

no code implementations • 7 Mar 2023 • Jinghan Jia, Yihua Zhang, Dogyoon Song, Sijia Liu, Alfred Hero

Most work in this learning paradigm has focused on resolving the problem of 'catastrophic forgetting,' which refers to a notorious dilemma between improving model accuracy over new data and retaining accuracy over previous data.

Adversarial Robustness Dataset Condensation +1

What Is Missing in IRM Training and Evaluation? Challenges and Solutions

no code implementations • 4 Mar 2023 • Yihua Zhang, Pranay Sharma, Parikshit Ram, Mingyi Hong, Kush Varshney, Sijia Liu

We propose a new IRM variant to address this limitation based on a novel viewpoint of ensemble IRM games as consensus-constrained bi-level optimization.

Out-of-Distribution Generalization

TextGrad: Advancing Robustness Evaluation in NLP by Gradient-Driven Optimization

1 code implementation • 19 Dec 2022 • Bairu Hou, Jinghan Jia, Yihua Zhang, Guanhua Zhang, Yang Zhang, Sijia Liu, Shiyu Chang

Robustness evaluation against adversarial examples has become increasingly important to unveil the trustworthiness of the prevailing deep models in natural language processing (NLP).

Adversarial Defense Adversarial Robustness +1

Understanding and Improving Visual Prompting: A Label-Mapping Perspective

1 code implementation • CVPR 2023 • Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zhang, Sijia Liu

We show that when reprogramming an ImageNet-pretrained ResNet-18 to 13 target tasks, our method outperforms baselines by a substantial margin, e.g., 7.9% and 6.7% accuracy improvements in transfer learning to the target Flowers102 and CIFAR100 datasets.

Transfer Learning Visual Prompting
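
Here, a visual prompt is a learnable input-space perturbation trained while the pretrained source model stays frozen, and label mapping assigns each target-task class to a source output class. A toy sketch of both ingredients (shapes and the mapping interface are illustrative, not the paper's exact method):

```python
import torch
import torch.nn as nn

class VisualPrompt(nn.Module):
    """Learnable additive prompt applied to every input image; the pretrained
    source model is frozen and only the prompt is trained."""
    def __init__(self, image_shape=(3, 224, 224)):
        super().__init__()
        self.delta = nn.Parameter(torch.zeros(1, *image_shape))

    def forward(self, x):
        return x + self.delta

def remap_logits(source_logits, label_map):
    """Label mapping: pick, for each target class t, the logit of its assigned
    source class label_map[t]."""
    return source_logits[:, label_map]

# Usage sketch: train `VisualPrompt` with cross-entropy on remapped logits
# while keeping the ImageNet-pretrained model frozen.
```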

Advancing Model Pruning via Bi-level Optimization

1 code implementation • 8 Oct 2022 • Yihua Zhang, Yuguang Yao, Parikshit Ram, Pu Zhao, Tianlong Chen, Mingyi Hong, Yanzhi Wang, Sijia Liu

To reduce the computation overhead, various efficient 'one-shot' pruning methods have been developed, but these schemes are usually unable to find winning tickets as good as IMP.


Fairness Reprogramming

1 code implementation • 21 Sep 2022 • Guanhua Zhang, Yihua Zhang, Yang Zhang, Wenqi Fan, Qing Li, Sijia Liu, Shiyu Chang

Specifically, FairReprogram considers the case where models can not be changed and appends to the input a set of perturbations, called the fairness trigger, which is tuned towards the fairness criteria under a min-max formulation.

Fairness
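
A compact sketch of one alternating update for an input-space fairness trigger under a min-max objective; the adversarial-debiasing instantiation, additive trigger, and loss weighting below are assumptions for illustration, not FairReprogram's exact formulation:

```python
import torch
import torch.nn.functional as F

def fair_reprogram_step(model, adversary, trigger, batch, opt_trigger, opt_adv, lam=1.0):
    """One alternating min-max update: the adversary tries to recover the
    sensitive attribute from the frozen model's outputs, while the trigger is
    tuned to preserve task accuracy and fool the adversary."""
    x, y, sensitive = batch
    logits = model(x + trigger)  # frozen model (requires_grad=False), perturbed input

    # Max step: adversary learns to predict the sensitive attribute.
    adv_loss = F.cross_entropy(adversary(logits.detach()), sensitive)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # Min step: trigger keeps task performance and removes the sensitive signal.
    task_loss = F.cross_entropy(logits, y)
    debias_loss = -F.cross_entropy(adversary(logits), sensitive)
    opt_trigger.zero_grad(); (task_loss + lam * debias_loss).backward(); opt_trigger.step()
```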

Distributed Adversarial Training to Robustify Deep Neural Networks at Scale

2 code implementations • 13 Jun 2022 • Gaoyuan Zhang, Songtao Lu, Yihua Zhang, Xiangyi Chen, Pin-Yu Chen, Quanfu Fan, Lee Martie, Lior Horesh, Mingyi Hong, Sijia Liu

Spurred by that, we propose distributed adversarial training (DAT), a large-batch adversarial training framework implemented over multiple machines.

Distributed Optimization

Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free

1 code implementation • CVPR 2022 • Tianlong Chen, Zhenyu Zhang, Yihua Zhang, Shiyu Chang, Sijia Liu, Zhangyang Wang

Trojan attacks threaten deep neural networks (DNNs) by poisoning them to behave normally on most samples, yet to produce manipulated results for inputs attached with a particular trigger.

Network Pruning

Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization

2 code implementations • 23 Dec 2021 • Yihua Zhang, Guanhua Zhang, Prashant Khanduri, Mingyi Hong, Shiyu Chang, Sijia Liu

We first show that the commonly-used Fast-AT is equivalent to using a stochastic gradient algorithm to solve a linearized BLO problem involving a sign operation.

Adversarial Defense
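
The "sign operation" referenced above is the FGSM step at the core of Fast-AT: a one-shot signed-gradient perturbation of the input followed by a standard weight update. A minimal sketch (epsilon, clamping range, and the plain FGSM initialization are illustrative):

```python
import torch
import torch.nn.functional as F

def fast_at_step(model, optimizer, x, y, epsilon=8/255):
    """One Fast-AT (FGSM-based adversarial training) step: craft a one-shot
    signed-gradient input perturbation, then train on the perturbed batch."""
    # Inner step: linearized attack via the sign of the input gradient.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = torch.clamp(x + epsilon * grad.sign(), 0.0, 1.0).detach()

    # Outer step: ordinary stochastic gradient update on the adversarial batch.
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```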
