Search Results for author: Jinghan Jia

Found 13 papers, 10 papers with code

UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models

1 code implementation • 19 Feb 2024 • Yihua Zhang, Yimeng Zhang, Yuguang Yao, Jinghan Jia, Jiancheng Liu, Xiaoming Liu, Sijia Liu

The rapid advancement of diffusion models (DMs) has not only transformed various real-world industries but has also introduced negative societal concerns, including the generation of harmful content, copyright disputes, and the rise of stereotypes and biases.

Machine Unlearning · Style Transfer

Robust MRI Reconstruction by Smoothed Unrolling (SMUG)

1 code implementation • 12 Dec 2023 • Shijun Liang, Van Hoang Minh Nguyen, Jinghan Jia, Ismail Alkhouri, Sijia Liu, Saiprasad Ravishankar

To address this problem, we propose a novel image reconstruction framework, termed Smoothed Unrolling (SMUG), which advances a deep unrolling-based MRI reconstruction model using a randomized smoothing (RS)-based robust learning approach.

Adversarial Defense · Image Classification +1
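The randomized-smoothing idea behind SMUG can be illustrated in a few lines: reconstruct several Gaussian-perturbed copies of the input and average the outputs. A minimal sketch, where `smoothed_reconstruct` and the identity "model" are hypothetical stand-ins for the paper's unrolled network, not its actual code:

```python
import numpy as np

def smoothed_reconstruct(model, x, sigma=0.1, n_samples=8, seed=None):
    """Randomized-smoothing wrapper: average the model's reconstructions
    over Gaussian-perturbed copies of the input (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    outs = [model(x + sigma * rng.standard_normal(x.shape))
            for _ in range(n_samples)]
    return np.mean(outs, axis=0)

# Toy "reconstruction model": the identity map stands in for the network.
identity = lambda z: z
x = np.ones((4, 4))
x_hat = smoothed_reconstruct(identity, x, sigma=0.05, n_samples=16, seed=0)
```

Averaging over perturbed inputs trades a small amount of accuracy for stability: small input perturbations change the smoothed output far less than they would change a single forward pass.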

To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now

1 code implementation • 18 Oct 2023 • Yimeng Zhang, Jinghan Jia, Xin Chen, Aochuan Chen, Yihua Zhang, Jiancheng Liu, Ke Ding, Sijia Liu

Our results demonstrate the effectiveness and efficiency merits of UnlearnDiffAtk over the state-of-the-art adversarial prompt generation method and reveal the lack of robustness of current safety-driven unlearning techniques when applied to DMs.

Adversarial Robustness · Benchmarking +1

DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training

1 code implementation • 3 Oct 2023 • Aochuan Chen, Yimeng Zhang, Jinghan Jia, James Diffenderfer, Jiancheng Liu, Konstantinos Parasyris, Yihua Zhang, Zheng Zhang, Bhavya Kailkhura, Sijia Liu

Our extensive experiments show that DeepZero achieves state-of-the-art (SOTA) accuracy on ResNet-20 trained on CIFAR-10, approaching FO training performance for the first time.

Adversarial Defense · Computational Efficiency +1
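Zeroth-order training of the kind DeepZero scales up replaces backpropagated gradients with finite-difference estimates built from function evaluations alone. A minimal coordinate-wise sketch on a toy quadratic loss (`zo_grad` is a hypothetical helper, not DeepZero's implementation):

```python
import numpy as np

def zo_grad(f, w, mu=1e-4):
    """Coordinate-wise forward-difference gradient estimate: perturb one
    parameter at a time by mu and measure the change in the loss."""
    g = np.zeros_like(w)
    fw = f(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = mu
        g[i] = (f(w + e) - fw) / mu
    return g

f = lambda w: float(np.sum(w ** 2))   # toy quadratic loss
w = np.array([1.0, -2.0, 3.0])
g = zo_grad(f, w)                      # approximates the analytic gradient 2w
```

The estimate costs one loss evaluation per parameter, which is why scaling it to deep-network-sized models, as the paper claims, is the hard part.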

Model Sparsity Can Simplify Machine Unlearning

1 code implementation • NeurIPS 2023 • Jinghan Jia, Jiancheng Liu, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu

We show in both theory and practice that model sparsity can boost the multi-criteria unlearning performance of an approximate unlearner, closing the approximation gap, while continuing to be efficient.

Machine Unlearning · Transfer Learning
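The model sparsity in this line of work typically comes from weight pruning applied before unlearning. A minimal one-shot magnitude-pruning sketch (`magnitude_prune` is a hypothetical helper, not the paper's code):

```python
import numpy as np

def magnitude_prune(w, sparsity=0.9):
    """One-shot magnitude pruning: zero out the smallest-magnitude
    entries so that roughly `sparsity` of the weights become zero."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    thresh = np.sort(np.abs(w).ravel())[k - 1]
    return w * (np.abs(w) > thresh)

w = np.arange(1.0, 11.0)          # toy weight vector 1..10
pruned = magnitude_prune(w, 0.9)  # only the largest weight survives
```

Sparsifying the model shrinks the set of parameters an approximate unlearner has to "forget" over, which is the intuition behind the paper's claim that sparsity closes the approximation gap.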

SMUG: Towards robust MRI reconstruction by smoothed unrolling

2 code implementations • 14 Mar 2023 • Hui Li, Jinghan Jia, Shijun Liang, Yuguang Yao, Saiprasad Ravishankar, Sijia Liu

To address this problem, we propose a novel image reconstruction framework, termed SMOOTHED UNROLLING (SMUG), which advances a deep unrolling-based MRI reconstruction model using a randomized smoothing (RS)-based robust learning operation.

Adversarial Defense · Image Classification +2

Text-Visual Prompting for Efficient 2D Temporal Video Grounding

1 code implementation • CVPR 2023 • Yimeng Zhang, Xin Chen, Jinghan Jia, Sijia Liu, Ke Ding

In this paper, we study the problem of temporal video grounding (TVG), which aims to predict the starting/ending time points of moments described by a text sentence within a long untrimmed video.

Sentence · Video Grounding +1

Robustness-preserving Lifelong Learning via Dataset Condensation

no code implementations • 7 Mar 2023 • Jinghan Jia, Yihua Zhang, Dogyoon Song, Sijia Liu, Alfred Hero

Most work in this learning paradigm has focused on resolving the problem of 'catastrophic forgetting,' which refers to a notorious dilemma between improving model accuracy over new data and retaining accuracy over previous data.

Adversarial Robustness · Dataset Condensation +1

TextGrad: Advancing Robustness Evaluation in NLP by Gradient-Driven Optimization

1 code implementation • 19 Dec 2022 • Bairu Hou, Jinghan Jia, Yihua Zhang, Guanhua Zhang, Yang Zhang, Sijia Liu, Shiyu Chang

Robustness evaluation against adversarial examples has become increasingly important to unveil the trustworthiness of the prevailing deep models in natural language processing (NLP).

Adversarial Defense · Adversarial Robustness +1

CLAWSAT: Towards Both Robust and Accurate Code Models

1 code implementation • 21 Nov 2022 • Jinghan Jia, Shashank Srikant, Tamara Mitrovska, Chuang Gan, Shiyu Chang, Sijia Liu, Una-May O'Reilly

We integrate contrastive learning (CL) with adversarial learning to co-optimize the robustness and accuracy of code models.

Code Generation · Code Summarization +2
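The contrastive-learning component of such a co-optimization pairs two views of the same sample and pulls their embeddings together while pushing other samples apart. A minimal NT-Xent-style loss sketch in NumPy (`nt_xent` is a hypothetical helper, not CLAWSAT's implementation):

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss over two batches of embeddings:
    row i of z1 and row i of z2 are positive pairs, all else negatives."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / tau                                # temperature-scaled cosine sims
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-logp[np.arange(2 * n), targets].mean())

z1 = np.eye(2)          # toy embeddings: two "views" of two samples
loss = nt_xent(z1, z1)  # matched views give a low loss
```

In an adversarial co-optimization, one of the two views would be an adversarially perturbed version of the sample rather than a benign augmentation.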

On the Robustness of deep learning-based MRI Reconstruction to image transformations

no code implementations • 9 Nov 2022 • Jinghan Jia, Mingyi Hong, Yimeng Zhang, Mehmet Akçakaya, Sijia Liu

We find a new instability source of MRI image reconstruction, i.e., the lack of reconstruction robustness against spatial transformations of an input, e.g., rotation and cutout.

Image Classification · MRI Reconstruction

How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective

1 code implementation • ICLR 2022 • Yimeng Zhang, Yuguang Yao, Jinghan Jia, JinFeng Yi, Mingyi Hong, Shiyu Chang, Sijia Liu

To tackle this problem, we next propose to prepend an autoencoder (AE) to a given (black-box) model so that DS can be trained using variance-reduced ZO optimization.

Adversarial Robustness · Image Classification +1

On Instabilities of Conventional Multi-Coil MRI Reconstruction to Small Adversarial Perturbations

no code implementations • 25 Feb 2021 • Chi Zhang, Jinghan Jia, Burhaneddin Yaman, Steen Moeller, Sijia Liu, Mingyi Hong, Mehmet Akçakaya

Although deep learning (DL) has received much attention in accelerated MRI, recent studies suggest small perturbations may lead to instabilities in DL-based reconstructions, leading to concern for their clinical application.

MRI Reconstruction
