Search Results for author: Jinghui Chen

Found 31 papers, 14 papers with code

Padam: Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks

no code implementations ICLR 2019 Jinghui Chen, Quanquan Gu

Experiments on standard benchmarks show that Padam maintains a convergence rate as fast as Adam/AMSGrad while generalizing as well as SGD in training deep neural networks.
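
For context, the partially adaptive update at the heart of Padam raises the second-moment term to a power $p \in (0, 1/2]$, interpolating between SGD with momentum ($p \to 0$) and AMSGrad ($p = 1/2$). A minimal NumPy sketch of one step (variable names are illustrative; small values such as $p = 1/8$ are reported to work well):

```python
import numpy as np

def padam_step(theta, grad, m, v, v_hat, lr=0.1,
               beta1=0.9, beta2=0.999, p=0.125, eps=1e-8):
    """One Padam update: p = 1/2 recovers AMSGrad, p -> 0 approaches
    SGD with momentum."""
    m = beta1 * m + (1 - beta1) * grad           # first moment
    v = beta2 * v + (1 - beta2) * grad ** 2      # second moment
    v_hat = np.maximum(v_hat, v)                 # AMSGrad-style max
    theta = theta - lr * m / (v_hat ** p + eps)  # partially adaptive step
    return theta, m, v, v_hat
```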

VQAttack: Transferable Adversarial Attacks on Visual Question Answering via Pre-trained Models

no code implementations16 Feb 2024 Ziyi Yin, Muchao Ye, Tianrong Zhang, Jiaqi Wang, Han Liu, Jinghui Chen, Ting Wang, Fenglong Ma

Correspondingly, we propose a novel VQAttack model, which iteratively generates both image and text perturbations via two designed modules: a large language model (LLM)-enhanced image attack module and a cross-modal joint attack module.

Adversarial Robustness · Language Modelling · +3

Federated Learning with Projected Trajectory Regularization

no code implementations22 Dec 2023 Tiejin Chen, Yuanpu Cao, Yujia Wang, Cho-Jui Hsieh, Jinghui Chen

Specifically, FedPTR allows local clients or the server to optimize an auxiliary (synthetic) dataset that mimics the learning dynamics of the recent model update and utilizes it to project the next-step model trajectory for local training regularization.

Federated Learning

On the Difficulty of Defending Contrastive Learning against Backdoor Attacks

no code implementations14 Dec 2023 Changjiang Li, Ren Pang, Bochuan Cao, Zhaohan Xi, Jinghui Chen, Shouling Ji, Ting Wang

Recent studies have shown that contrastive learning, like supervised learning, is highly vulnerable to backdoor attacks wherein malicious functions are injected into target models, only to be activated by specific triggers.

Contrastive Learning

Stealthy and Persistent Unalignment on Large Language Models via Backdoor Injections

no code implementations15 Nov 2023 Yuanpu Cao, Bochuan Cao, Jinghui Chen

In this work, we show that it is possible to conduct stealthy and persistent unalignment on large language models via backdoor injections.

IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI

1 code implementation NeurIPS 2023 Bochuan Cao, Changjiang Li, Ting Wang, Jinyuan Jia, Bo Li, Jinghui Chen

IMPRESS is based on the key observation that imperceptible perturbations can lead to a perceptible inconsistency between the original image and its diffusion-reconstructed counterpart. This inconsistency can be used to devise a new optimization strategy that purifies the image, which may weaken the protection of the original image against unauthorized data usage (e.g., style mimicking, malicious editing).
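
A minimal sketch of a purification loop in this spirit (not the authors' IMPRESS implementation; `reconstruct` stands in for a frozen, differentiable diffusion encode-decode pass, and the loss weighting is a placeholder):

```python
import torch

def purify(x, reconstruct, steps=100, lr=0.01, lam=0.1):
    """Optimize the image toward consistency with its reconstruction
    while staying close to the input."""
    x_p = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_p], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (reconstruct(x_p) - x_p).pow(2).mean() \
             + lam * (x_p - x).pow(2).mean()
        loss.backward()
        opt.step()
    return x_p.detach().clamp(0, 1)
```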

Image Generation

VLATTACK: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models

1 code implementation NeurIPS 2023 Ziyi Yin, Muchao Ye, Tianrong Zhang, Tianyu Du, Jinguo Zhu, Han Liu, Jinghui Chen, Ting Wang, Fenglong Ma

In this paper, we investigate a new yet practical task: crafting image and text perturbations using pre-trained VL models to attack black-box fine-tuned models on different downstream tasks.

Adversarial Robustness

On the Safety of Open-Sourced Large Language Models: Does Alignment Really Prevent Them From Being Misused?

no code implementations2 Oct 2023 Hangfan Zhang, Zhimeng Guo, Huaisheng Zhu, Bochuan Cao, Lu Lin, Jinyuan Jia, Jinghui Chen, Dinghao Wu

A natural question is: "Could alignment really prevent those open-sourced large language models from being misused to generate undesired content?"

Text Generation

Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM

1 code implementation18 Sep 2023 Bochuan Cao, Yuanpu Cao, Lu Lin, Jinghui Chen

In this work, we introduce a Robustly Aligned LLM (RA-LLM) to defend against potential alignment-breaking attacks.

On the Vulnerability of Backdoor Defenses for Federated Learning

1 code implementation19 Jan 2023 Pei Fang, Jinghui Chen

Federated Learning (FL) is a popular distributed machine learning paradigm that enables jointly training a global model without sharing clients' data.

Backdoor Attack · Federated Learning

Spectral Augmentation for Self-Supervised Learning on Graphs

1 code implementation2 Oct 2022 Lu Lin, Jinghui Chen, Hongning Wang

Graph contrastive learning (GCL), as an emerging self-supervised learning technique on graphs, aims to learn representations via instance discrimination.
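
The instance-discrimination objective referred to here is typically an InfoNCE loss between two augmented views; a standard PyTorch version (generic to GCL, not specific to this paper's spectral augmentation):

```python
import torch
import torch.nn.functional as F

def infonce(z1, z2, tau=0.5):
    """z1, z2: embeddings of two augmented views of the same nodes.
    Matching rows are positives; all other rows are negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                           # pairwise similarity
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on diagonal
    return F.cross_entropy(logits, labels)
```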

Contrastive Learning · Node Classification · +3

One-shot Neural Backdoor Erasing via Adversarial Weight Masking

no code implementations10 Jul 2022 Shuwen Chai, Jinghui Chen

Recent studies show that despite achieving high accuracy on a number of real-world applications, deep neural networks (DNNs) can be backdoored: by injecting triggered data samples into the training dataset, the adversary can mislead the trained model into classifying any test data into the target class as long as the trigger pattern is present.
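
A toy illustration of the poisoning step described above, assuming grayscale images in an array of shape (n, H, W); the 3x3 corner trigger and poison rate are illustrative, not taken from the paper:

```python
import numpy as np

def poison(images, labels, target_class, rate=0.05, patch_value=1.0):
    """Stamp a small trigger patch onto a random fraction of training
    images and relabel them to the adversary's target class."""
    n = len(images)
    idx = np.random.choice(n, int(rate * n), replace=False)
    images, labels = images.copy(), labels.copy()
    images[idx, -3:, -3:] = patch_value  # 3x3 trigger in the corner
    labels[idx] = target_class
    return images, labels
```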

RoCourseNet: Distributionally Robust Training of a Prediction Aware Recourse Model

1 code implementation1 Jun 2022 Hangzhi Guo, Feiran Jia, Jinghui Chen, Anna Squicciarini, Amulya Yadav

To address this problem, we propose RoCourseNet, a training framework that jointly optimizes predictions and recourses that are robust to future data shifts.

Counterfactual

Communication-Efficient Adaptive Federated Learning

1 code implementation5 May 2022 Yujia Wang, Lu Lin, Jinghui Chen

We show that in the nonconvex stochastic optimization setting, our proposed FedCAMS achieves the same convergence rate of $O(\frac{1}{\sqrt{TKm}})$ as its non-compressed counterparts.
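
Reading the symbols in the usual federated-learning notation (an assumption, since the snippet itself does not define them): $T$ would be the number of training iterations, $K$ the number of local steps per communication round, and $m$ the number of participating clients, so the rate matches the uncompressed baseline despite compressed communication.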

Federated Learning · Quantization · +1

Do Language Models Plagiarize?

1 code implementation15 Mar 2022 Jooyoung Lee, Thai Le, Jinghui Chen, Dongwon Lee

Our results suggest that (1) three types of plagiarism widely exist in LMs beyond memorization, (2) both size and decoding methods of LMs are strongly associated with the degrees of plagiarism they exhibit, and (3) fine-tuned LMs' plagiarism patterns vary based on their corpus similarity and homogeneity.

Language Modelling · Memorization · +1

Learnability Lock: Authorized Learnability Control Through Adversarial Invertible Transformations

no code implementations ICLR 2022 Weiqi Peng, Jinghui Chen

In particular, we propose an adversarial invertible transformation, which can be viewed as a mapping from image to image, to slightly modify data samples so that they become "unlearnable" by machine learning models with negligible loss of visual features.
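
A toy example of an invertible image-to-image map (this illustrates only the invertibility requirement; the paper's transformation is adversarially learned, which is not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)
key = (rng.uniform(0.8, 1.2), rng.uniform(-0.05, 0.05))  # secret key (a, b)

def lock(x, key):
    a, b = key
    return a * x + b    # pixel-wise affine map, invertible since a != 0

def unlock(y, key):
    a, b = key
    return (y - b) / a  # exact inverse, recoverable only with the key
```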

Benign Overfitting in Adversarially Robust Linear Classification

no code implementations31 Dec 2021 Jinghui Chen, Yuan Cao, Quanquan Gu

Our result suggests that under moderate perturbations, adversarially trained linear classifiers can achieve the near-optimal standard and adversarial risks, despite overfitting the noisy training data.

Classification

Communication-Compressed Adaptive Gradient Method for Distributed Nonconvex Optimization

no code implementations1 Nov 2021 Yujia Wang, Lu Lin, Jinghui Chen

We prove that the proposed communication-efficient distributed adaptive gradient method converges to the first-order stationary point with the same iteration complexity as uncompressed vanilla AMSGrad in the stochastic nonconvex optimization setting.
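
Methods in this line commonly pair gradient compression with error feedback; a generic top-k compressor sketch for a flattened gradient (an illustration of the pattern, not necessarily this paper's exact scheme):

```python
import numpy as np

def topk_with_error_feedback(grad, residual, k):
    """Keep the k largest-magnitude coordinates of grad + residual;
    carry the uncommunicated remainder forward as the new residual."""
    g = grad + residual
    idx = np.argpartition(np.abs(g), -k)[-k:]
    compressed = np.zeros_like(g)
    compressed[idx] = g[idx]
    return compressed, g - compressed
```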

Do Wider Neural Networks Really Help Adversarial Robustness?

1 code implementation NeurIPS 2021 Boxi Wu, Jinghui Chen, Deng Cai, Xiaofei He, Quanquan Gu

Previous empirical results suggest that adversarial training requires wider networks for better performance.

Adversarial Robustness

Efficient Robust Training via Backward Smoothing

1 code implementation3 Oct 2020 Jinghui Chen, Yu Cheng, Zhe Gan, Quanquan Gu, Jingjing Liu

In this work, we develop a new understanding of Fast Adversarial Training by viewing random initialization as performing randomized smoothing for better optimization of the inner maximization problem.
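
The Fast Adversarial Training pattern being reinterpreted, one-step FGSM from a random start, looks roughly as follows in PyTorch (a generic sketch of that baseline, not the authors' backward-smoothing method):

```python
import torch
import torch.nn.functional as F

def fgsm_random_start(model, x, y, eps):
    """Random initialization inside the eps-ball, then a single
    signed-gradient step, projected back onto the ball."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    with torch.no_grad():
        delta = (delta + eps * delta.grad.sign()).clamp_(-eps, eps)
    return (x + delta).detach()
```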

Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models

1 code implementation1 Mar 2020 Xiao Zhang, Jinghui Chen, Quanquan Gu, David Evans

Starting with Gilmer et al. (2018), several works have demonstrated the inevitability of adversarial examples based on different assumptions about the underlying input probability space.

Adversarial Robustness

Training Deep Neural Networks with Partially Adaptive Momentum

no code implementations25 Sep 2019 Jinghui Chen, Dongruo Zhou, Yiqi Tang, Ziyan Yang, Yuan Cao, Quanquan Gu

Experiments on standard benchmarks show that our proposed algorithm maintains a convergence rate as fast as Adam/AMSGrad while generalizing as well as SGD in training deep neural networks.

A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks

2 code implementations ICLR 2019 Jinghui Chen, Dongruo Zhou, Jin-Feng Yi, Quanquan Gu

Depending on how much information an adversary can access, adversarial attacks can be classified as white-box attacks or black-box attacks.
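
For an $\ell_\infty$ ball of radius $\epsilon$, the Frank-Wolfe linear maximization oracle has the closed form $x_0 + \epsilon\,\mathrm{sign}(\nabla)$, which keeps every iterate feasible without projection. A plain white-box sketch of the framework (the paper's variants may add refinements; treat this as illustrative):

```python
import torch
import torch.nn.functional as F

def fw_attack(model, x0, y, eps, steps=20):
    """Vanilla Frank-Wolfe ascent on the loss over the L-inf eps-ball."""
    x = x0.clone()
    for t in range(steps):
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad, = torch.autograd.grad(loss, x)
        v = x0 + eps * grad.sign()   # linear maximization oracle
        gamma = 2.0 / (t + 3)        # standard FW step size
        x = ((1 - gamma) * x + gamma * v).detach()
    return x
```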

Adversarial Attack

On the Convergence of Adaptive Gradient Methods for Nonconvex Optimization

no code implementations16 Aug 2018 Dongruo Zhou, Jinghui Chen, Yuan Cao, Yiqi Tang, Ziyan Yang, Quanquan Gu

In this paper, we provide a fine-grained convergence analysis for a general class of adaptive gradient methods including AMSGrad, RMSProp and AdaGrad.
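
For context, the generic update covered by such analyses is $m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t$ and $v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2$, with AMSGrad taking $\hat{v}_t = \max(\hat{v}_{t-1}, v_t)$ and updating $x_{t+1} = x_t - \eta_t m_t / \sqrt{\hat{v}_t}$; RMSProp corresponds to $\beta_1 = 0$ with $v_t$ used directly, and AdaGrad to accumulating $v_t = v_{t-1} + g_t^2$.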

Covariate Adjusted Precision Matrix Estimation via Nonconvex Optimization

no code implementations ICML 2018 Jinghui Chen, Pan Xu, Lingxiao Wang, Jian Ma, Quanquan Gu

We propose a nonconvex estimator for the covariate adjusted precision matrix estimation problem in the high dimensional regime, under sparsity constraints.

Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks

2 code implementations18 Jun 2018 Jinghui Chen, Dongruo Zhou, Yiqi Tang, Ziyan Yang, Yuan Cao, Quanquan Gu

Experiments on standard benchmarks show that our proposed algorithm maintains a convergence rate as fast as Adam/AMSGrad while generalizing as well as SGD in training deep neural networks.

Global Convergence of Langevin Dynamics Based Algorithms for Nonconvex Optimization

no code implementations NeurIPS 2018 Pan Xu, Jinghui Chen, Difan Zou, Quanquan Gu

Furthermore, we prove for the first time a global convergence guarantee for variance-reduced stochastic gradient Langevin dynamics (SVRG-LD) to an almost minimizer within $\tilde O\big(\sqrt{n}d^5/(\lambda^4\epsilon^{5/2})\big)$ stochastic gradient evaluations, which outperforms the gradient complexities of GLD and SGLD in a wide regime.
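
For reference, the baseline GLD iteration is $x_{k+1} = x_k - \eta \nabla F(x_k) + \sqrt{2\eta/\beta}\,\xi_k$ with $\xi_k \sim N(0, I)$ and inverse temperature $\beta$; SGLD replaces $\nabla F$ with a stochastic minibatch gradient, and SVRG-LD with a variance-reduced one, which is what drives the improved complexity.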

Robust Wirtinger Flow for Phase Retrieval with Arbitrary Corruption

no code implementations20 Apr 2017 Jinghui Chen, Lingxiao Wang, Xiao Zhang, Quanquan Gu

We consider the robust phase retrieval problem of recovering the unknown signal from the magnitude-only measurements, where the measurements can be contaminated by both sparse arbitrary corruption and bounded random noise.
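
For reference, plain (non-robust) Wirtinger flow runs gradient descent on the amplitude-squared loss after a spectral initialization; a compact NumPy sketch of that baseline (the paper's robust variant additionally handles sparse corruption, which is omitted here, and the step size may need tuning):

```python
import numpy as np

def wirtinger_flow(A, y, steps=500, lr=0.1):
    """A: (m, n) measurement matrix; y: squared magnitudes |A @ x|**2."""
    m, n = A.shape
    M = (A.conj().T * y) @ A / m          # y-weighted spectral matrix
    _, vecs = np.linalg.eigh(M)
    z = vecs[:, -1] * np.sqrt(y.mean())   # simplified spectral initialization
    for _ in range(steps):
        Az = A @ z
        grad = A.conj().T @ ((np.abs(Az) ** 2 - y) * Az) / m  # Wirtinger gradient
        z = z - lr * grad
    return z
```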

Retrieval

High Dimensional Multivariate Regression and Precision Matrix Estimation via Nonconvex Optimization

no code implementations2 Jun 2016 Jinghui Chen, Quanquan Gu

We propose a nonconvex estimator for joint multivariate regression and precision matrix estimation in the high dimensional regime, under sparsity constraints.

Regression · Vocal Bursts Intensity Prediction
