Search Results for author: Junyuan Hong

Found 24 papers, 15 papers with code

Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression

no code implementations18 Mar 2024 Junyuan Hong, Jinhao Duan, Chenhui Zhang, Zhangheng Li, Chulin Xie, Kelsey Lieberman, James Diffenderfer, Brian Bartoldson, Ajay Jaiswal, Kaidi Xu, Bhavya Kailkhura, Dan Hendrycks, Dawn Song, Zhangyang Wang, Bo Li

While state-of-the-art (SoTA) compression methods boast impressive advancements in preserving benign task performance, the potential risks of compression in terms of safety and trustworthiness have been largely neglected.

Ethics · Fairness +1

Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk

1 code implementation14 Mar 2024 Zhangheng Li, Junyuan Hong, Bo Li, Zhangyang Wang

While diffusion models have recently demonstrated remarkable progress in generating realistic images, privacy risks also arise: published models or APIs could generate training images and thus leak privacy-sensitive training information.

Inference Attack · Membership Inference Attack

On the Generalization Ability of Unsupervised Pretraining

no code implementations11 Mar 2024 Yuyang Deng, Junyuan Hong, Jiayu Zhou, Mehrdad Mahdavi

Recent advances in unsupervised learning have shown that unsupervised pre-training, followed by fine-tuning, can improve model generalization.

Binary Classification · Unsupervised Pre-training

Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark

1 code implementation18 Feb 2024 Yihua Zhang, Pingzhi Li, Junyuan Hong, Jiaxiang Li, Yimeng Zhang, Wenqing Zheng, Pin-Yu Chen, Jason D. Lee, Wotao Yin, Mingyi Hong, Zhangyang Wang, Sijia Liu, Tianlong Chen

In the evolving landscape of natural language processing (NLP), fine-tuning pre-trained Large Language Models (LLMs) with first-order (FO) optimizers like SGD and Adam has become standard.

Benchmarking
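
The memory saving studied in this benchmark comes from replacing backpropagation with forward-only gradient estimates. A minimal sketch of a two-point (SPSA-style) zeroth-order SGD loop on a toy quadratic, written from the general technique rather than the benchmark's codebase:

```python
import random

def zo_gradient(loss, params, mu=1e-3):
    """Two-point zeroth-order estimate: probe the loss along a random
    Gaussian direction u and scale u by the directional derivative.
    Only two forward evaluations are needed -- no backpropagation."""
    u = [random.gauss(0.0, 1.0) for _ in params]
    plus = loss([p + mu * ui for p, ui in zip(params, u)])
    minus = loss([p - mu * ui for p, ui in zip(params, u)])
    scale = (plus - minus) / (2.0 * mu)
    return [scale * ui for ui in u]

def zo_sgd(loss, params, lr=0.02, steps=3000, seed=0):
    """Plain SGD driven by the zeroth-order gradient estimate."""
    random.seed(seed)
    for _ in range(steps):
        g = zo_gradient(loss, params)
        params = [p - lr * gi for p, gi in zip(params, g)]
    return params

# Toy objective: minimize sum((p - 1)^2); ZO-SGD drifts toward p = 1.
quadratic = lambda ps: sum((p - 1.0) ** 2 for p in ps)
```

Because only forward passes are stored, peak memory stays at inference level, which is the trade-off the benchmark compares against first-order fine-tuning.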

DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer

1 code implementation27 Nov 2023 Junyuan Hong, Jiachen T. Wang, Chenhui Zhang, Zhangheng Li, Bo Li, Zhangyang Wang

To ensure that the prompts do not leak private information, we introduce the first private prompt generation mechanism, by a differentially-private (DP) ensemble of in-context learning with private demonstrations.

In-Context Learning · Language Modelling +3
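
The flavor of a differentially private ensemble over in-context candidates can be illustrated with report-noisy-max selection; the function names, vote counts, and noise scale below are illustrative assumptions, not DP-OPT's exact mechanism:

```python
import math
import random

def gumbel(scale, rng):
    """Sample Gumbel(0, scale) noise via the inverse-transform method."""
    return -scale * math.log(-math.log(rng.random()))

def dp_noisy_max(votes, epsilon, rng):
    """Report-noisy-max with Gumbel noise: equivalent to the exponential
    mechanism for selection when each private demonstration can change
    any candidate's vote count by at most 1 (sensitivity 1)."""
    scale = 2.0 / epsilon
    return max(votes, key=lambda c: votes[c] + gumbel(scale, rng))
```

Each ensemble member, prompted with private demonstrations, "votes" for its preferred candidate; only the noisy winner is released, so the selection satisfies epsilon-DP with respect to the demonstrations.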

Understanding Deep Gradient Leakage via Inversion Influence Functions

1 code implementation NeurIPS 2023 Haobo Zhang, Junyuan Hong, Yuyang Deng, Mehrdad Mahdavi, Jiayu Zhou

Deep Gradient Leakage (DGL) is a highly effective attack that recovers private training images from gradient vectors.
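
A concrete illustration of why gradients leak data: for a linear layer with bias and a single training sample, the chain rule gives dL/dW = (dL/db) xᵀ, so the input can be read off the gradient in closed form. This is a minimal sketch of the leakage phenomenon, not the paper's influence-function analysis:

```python
def recover_input(grad_W, grad_b, tol=1e-12):
    """For y = W x + b and a single-sample loss L, dL/dW[i][j] equals
    dL/db[i] * x[j], so any row of grad_W with a nonzero bias gradient
    reveals the private input x exactly."""
    for row, gb in zip(grad_W, grad_b):
        if abs(gb) > tol:
            return [g / gb for g in row]
    raise ValueError("all bias gradients vanish; input not recoverable")

def leak_demo():
    """Build the gradients of L = 0.5 * sum(y_i^2) analytically,
    then recover the input without ever observing it directly."""
    x = [0.5, -1.0, 2.0]
    W = [[1.0, 0.0, 1.0], [0.0, 2.0, -1.0]]
    b = [0.1, -0.2]
    y = [sum(wij * xj for wij, xj in zip(wi, x)) + bi
         for wi, bi in zip(W, b)]
    grad_b = y                                    # dL/db_i = y_i
    grad_W = [[yi * xj for xj in x] for yi in y]  # dL/dW_ij = y_i * x_j
    return recover_input(grad_W, grad_b)
```

Deeper networks require the iterative gradient-matching attacks the paper studies, but this closed-form case shows the information is genuinely present in the gradient vector.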

Safe and Robust Watermark Injection with a Single OoD Image

1 code implementation4 Sep 2023 Shuyang Yu, Junyuan Hong, Haobo Zhang, Haotao Wang, Zhangyang Wang, Jiayu Zhou

Training a high-performance deep neural network requires large amounts of data and computational resources.

Model extraction

FedNoisy: Federated Noisy Label Learning Benchmark

1 code implementation20 Jun 2023 Siqi Liang, Jintao Huang, Junyuan Hong, Dun Zeng, Jiayu Zhou, Zenglin Xu

Federated learning has gained popularity for distributed learning without aggregating sensitive data from clients.

Federated Learning · Learning with noisy labels

Revisiting Data-Free Knowledge Distillation with Poisoned Teachers

1 code implementation4 Jun 2023 Junyuan Hong, Yi Zeng, Shuyang Yu, Lingjuan Lyu, Ruoxi Jia, Jiayu Zhou

Data-free knowledge distillation (KD) helps transfer knowledge from a pre-trained model (known as the teacher model) to a smaller model (known as the student model) without access to the original training data used for training the teacher model.

Backdoor Defense for Data-Free Distillation with Poisoned Teachers · Data-free Knowledge Distillation
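
Data-free KD in miniature: the student learns purely by querying the teacher on synthetic inputs. The linear student and random inputs below are illustrative stand-ins (real methods use a learned generator and deep networks):

```python
import random

def distill_without_data(teacher, dim, lr=0.1, rounds=3000, seed=0):
    """A linear student w imitates a scalar-output teacher using only
    teacher queries on synthetic inputs -- no access to the teacher's
    original training data."""
    rng = random.Random(seed)
    w = [0.0] * dim
    for _ in range(rounds):
        x = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        # Least-mean-squares step toward the teacher's output on x.
        err = sum(wi * xi for wi, xi in zip(w, x)) - teacher(x)
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w
```

Note the double-edged sword this paper revisits: since the student copies the teacher's behavior on whatever inputs the generator produces, a poisoned (backdoored) teacher can transfer its backdoor along with its knowledge.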

On the Hardness of Robustness Transfer: A Perspective from Rademacher Complexity over Symmetric Difference Hypothesis Space

no code implementations23 Feb 2023 Yuyang Deng, Nidham Gazagnadou, Junyuan Hong, Mehrdad Mahdavi, Lingjuan Lyu

Recent studies demonstrated that adversarially robust learning under $\ell_\infty$ attack is harder to generalize to different domains than standard domain adaptation.

Binary Classification · Domain Generalization +1

A Privacy-Preserving Hybrid Federated Learning Framework for Financial Crime Detection

1 code implementation7 Feb 2023 Haobo Zhang, Junyuan Hong, Fan Dong, Steve Drew, Liangjie Xue, Jiayu Zhou

Developing a mechanism for battling financial crimes is a pressing task that requires in-depth collaboration from multiple institutions, yet such collaboration imposes significant technical challenges due to the privacy and security requirements of distributed financial data.

Federated Learning · Privacy Preserving

MECTA: Memory-Economic Continual Test-Time Model Adaptation

2 code implementations ICLR 2023 Junyuan Hong, Lingjuan Lyu, Jiayu Zhou, Michael Spranger

The proposed MECTA is efficient and can be seamlessly plugged into state-of-the-art CTA algorithms with negligible computation and memory overhead.

Test-time Adaptation

Outsourcing Training without Uploading Data via Efficient Collaborative Open-Source Sampling

no code implementations23 Oct 2022 Junyuan Hong, Lingjuan Lyu, Jiayu Zhou, Michael Spranger

As deep learning blooms with growing demand for computation and data resources, outsourcing model training to a powerful cloud server becomes an attractive alternative to training at a low-power and cost-effective end device.

Model Compression

Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork

1 code implementation12 Oct 2022 Haotao Wang, Junyuan Hong, Aston Zhang, Jiayu Zhou, Zhangyang Wang

As a result, both the stem and the classification head in the final network are hardly affected by backdoor training samples.

backdoor defense · Classification +1

Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization

1 code implementation ICLR 2022 Junyuan Hong, Haotao Wang, Zhangyang Wang, Jiayu Zhou

In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.

Personalized Federated Learning

Equalized Robustness: Towards Sustainable Fairness Under Distributional Shifts

no code implementations29 Sep 2021 Haotao Wang, Junyuan Hong, Jiayu Zhou, Zhangyang Wang

In this paper, we first propose a new fairness goal, termed Equalized Robustness (ER), to impose fair model robustness against unseen distribution shifts across majority and minority groups.

Fairness

Federated Robustness Propagation: Sharing Robustness in Heterogeneous Federated Learning

1 code implementation18 Jun 2021 Junyuan Hong, Haotao Wang, Zhangyang Wang, Jiayu Zhou

In this paper, we study a novel FL strategy: propagating adversarial robustness from rich-resource users that can afford AT, to those with poor resources that cannot afford it, during federated learning.

Adversarial Robustness · Federated Learning

Data-Free Knowledge Distillation for Heterogeneous Federated Learning

4 code implementations20 May 2021 Zhuangdi Zhu, Junyuan Hong, Jiayu Zhou

Federated Learning (FL) is a decentralized machine-learning paradigm, in which a global server iteratively averages the model parameters of local users without accessing their data.

Data-free Knowledge Distillation · Federated Learning +1
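
Per round, the iterative averaging described in the abstract is a dataset-size-weighted mean of client parameter vectors (FedAvg-style aggregation); a minimal sketch with illustrative names:

```python
def fed_avg(client_params, client_sizes):
    """One aggregation round: average each parameter coordinate across
    clients, weighting client k by its local dataset size n_k, so the
    server never touches the raw client data."""
    total = float(sum(client_sizes))
    dim = len(client_params[0])
    return [
        sum(w[j] * n for w, n in zip(client_params, client_sizes)) / total
        for j in range(dim)
    ]
```

Parameter averaging degrades when clients are heterogeneous, which is the gap the paper's data-free knowledge distillation on the server is designed to close.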

Dynamic Privacy Budget Allocation Improves Data Efficiency of Differentially Private Gradient Descent

no code implementations19 Jan 2021 Junyuan Hong, Zhangyang Wang, Jiayu Zhou

In this paper, we provide a comprehensive analysis of noise influence in dynamic privacy schedules to answer these critical questions.

On Dynamic Noise Influence in Differential Private Learning

no code implementations1 Jan 2021 Junyuan Hong, Zhangyang Wang, Jiayu Zhou

In this paper, we provide a comprehensive analysis of noise influence in dynamic privacy schedules to answer these critical questions.

Disturbance Grassmann Kernels for Subspace-Based Learning

no code implementations10 Feb 2018 Junyuan Hong, Huanhuan Chen, Feng Lin

In this paper, we focus on subspace-based learning problems, where data elements are linear subspaces instead of vectors.
