no code implementations • 18 Mar 2024 • Junyuan Hong, Jinhao Duan, Chenhui Zhang, Zhangheng Li, Chulin Xie, Kelsey Lieberman, James Diffenderfer, Brian Bartoldson, Ajay Jaiswal, Kaidi Xu, Bhavya Kailkhura, Dan Hendrycks, Dawn Song, Zhangyang Wang, Bo Li
While state-of-the-art (SoTA) compression methods boast impressive advancements in preserving benign task performance, the potential risks of compression in terms of safety and trustworthiness have been largely neglected.
1 code implementation • 14 Mar 2024 • Zhangheng Li, Junyuan Hong, Bo Li, Zhangyang Wang
While diffusion models have recently demonstrated remarkable progress in generating realistic images, privacy risks also arise: published models or APIs could generate training images and thus leak privacy-sensitive training information.
no code implementations • 11 Mar 2024 • Yuyang Deng, Junyuan Hong, Jiayu Zhou, Mehrdad Mahdavi
Recent advances in unsupervised learning have shown that unsupervised pre-training, followed by fine-tuning, can improve model generalization.
1 code implementation • 18 Feb 2024 • Yihua Zhang, Pingzhi Li, Junyuan Hong, Jiaxiang Li, Yimeng Zhang, Wenqing Zheng, Pin-Yu Chen, Jason D. Lee, Wotao Yin, Mingyi Hong, Zhangyang Wang, Sijia Liu, Tianlong Chen
In the evolving landscape of natural language processing (NLP), fine-tuning pre-trained Large Language Models (LLMs) with first-order (FO) optimizers like SGD and Adam has become standard.
1 code implementation • 27 Nov 2023 • Junyuan Hong, Jiachen T. Wang, Chenhui Zhang, Zhangheng Li, Bo Li, Zhangyang Wang
To ensure that the prompts do not leak private information, we introduce the first private prompt generation mechanism, by a differentially-private (DP) ensemble of in-context learning with private demonstrations.
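A minimal sketch of one way such a differentially private ensemble can aggregate answers: each "teacher" (here, one in-context-learning run on a disjoint set of private demonstrations) casts a vote, and only a Laplace-noised argmax of the vote counts is released. This is a generic PATE-style noisy-vote mechanism for illustration; the paper's actual prompt-generation mechanism may differ in its aggregation and privacy accounting.

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of Laplace(0, scale).
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_vote(votes, num_classes, epsilon, rng=None):
    """Noisy-argmax aggregation of per-teacher votes.

    Each teacher contributes one vote; releasing only the noised argmax
    (Laplace scale 2/epsilon on counts of sensitivity 1) is the standard
    DP ensemble trick for hiding any single teacher's contribution.
    """
    rng = rng or random.Random(0)
    counts = [0] * num_classes
    for v in votes:
        counts[v] += 1
    noisy = [c + laplace_noise(2.0 / epsilon, rng) for c in counts]
    return max(range(num_classes), key=noisy.__getitem__)
```

With a clear majority and a loose privacy budget, the noised winner almost always matches the true majority vote.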
1 code implementation • NeurIPS 2023 • Haobo Zhang, Junyuan Hong, Yuyang Deng, Mehrdad Mahdavi, Jiayu Zhou
Deep Gradient Leakage (DGL) is a highly effective attack that recovers private training images from gradient vectors.
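To see why gradient vectors can leak inputs at all, consider the well-known closed-form case of a single linear classification layer: the weight gradient is an outer product of the error signal and the input, so dividing any weight-gradient row by the matching bias gradient recovers the input exactly. DGL generalizes this to deep networks via iterative gradient matching; the sketch below only illustrates the linear-layer leakage, not the paper's attack or defense.

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def gradients(W, b, x, label):
    # Cross-entropy gradients for a linear classifier z = W x + b.
    z = [sum(W[i][j] * x[j] for j in range(len(x))) + b[i] for i in range(len(b))]
    p = softmax(z)
    d = [p[i] - (1.0 if i == label else 0.0) for i in range(len(b))]
    dW = [[d[i] * x[j] for j in range(len(x))] for i in range(len(b))]
    db = d
    return dW, db

def recover_input(dW, db, tol=1e-9):
    # dW[i] = db[i] * x, so any row with db[i] != 0 reveals x exactly.
    for i, g in enumerate(db):
        if abs(g) > tol:
            return [w / g for w in dW[i]]
    return None
```

For a single training example, the recovery is exact, which is precisely the kind of leakage that gradient-perturbation defenses aim to break.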
1 code implementation • 4 Sep 2023 • Shuyang Yu, Junyuan Hong, Haobo Zhang, Haotao Wang, Zhangyang Wang, Jiayu Zhou
Training a high-performance deep neural network requires large amounts of data and computational resources.
1 code implementation • 20 Jun 2023 • Siqi Liang, Jintao Huang, Junyuan Hong, Dun Zeng, Jiayu Zhou, Zenglin Xu
Federated learning has gained popularity for distributed learning without aggregating sensitive data from clients.
1 code implementation • 4 Jun 2023 • Junyuan Hong, Yi Zeng, Shuyang Yu, Lingjuan Lyu, Ruoxi Jia, Jiayu Zhou
Data-free knowledge distillation (KD) helps transfer knowledge from a pre-trained model (known as the teacher model) to a smaller model (known as the student model) without access to the original training data used for training the teacher model.
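The core distillation objective transferred in such a setup is the standard KD loss: a KL divergence between temperature-softened teacher and student output distributions (data-free variants additionally synthesize the inputs with a generator, which is omitted here). A minimal sketch of that objective:

```python
import math

def softmax_T(logits, T):
    # Temperature-softened softmax.
    m = max(logits)
    e = [math.exp((z - m) / T) for z in logits]
    s = sum(e)
    return [v / s for v in e]

def kd_loss(teacher_logits, student_logits, T=4.0):
    """KL(teacher || student) on temperature-T softened distributions,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax_T(teacher_logits, T)
    q = softmax_T(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

The loss is zero when student and teacher agree and grows as their softened predictions diverge; a poisoned teacher is dangerous exactly because the student is trained to minimize this divergence.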
Backdoor Defense for Data-Free Distillation with Poisoned Teachers
no code implementations • 23 Feb 2023 • Yuyang Deng, Nidham Gazagnadou, Junyuan Hong, Mehrdad Mahdavi, Lingjuan Lyu
Recent studies demonstrated that adversarially robust learning under $\ell_\infty$ attacks is harder to generalize to different domains than standard domain adaptation.
1 code implementation • 7 Feb 2023 • Haobo Zhang, Junyuan Hong, Fan Dong, Steve Drew, Liangjie Xue, Jiayu Zhou
Developing a mechanism for battling financial crimes is a pressing task that requires in-depth collaboration among multiple institutions, yet such collaboration imposes significant technical challenges due to the privacy and security requirements of distributed financial data.
2 code implementations • ICLR 2023 • Junyuan Hong, Lingjuan Lyu, Jiayu Zhou, Michael Spranger
The proposed MECTA is efficient and can be seamlessly plugged into state-of-the-art CTA algorithms with negligible computation and memory overhead.
1 code implementation • ICLR 2023 • Shuyang Yu, Junyuan Hong, Haotao Wang, Zhangyang Wang, Jiayu Zhou
We propose to take advantage of such heterogeneity and turn the curse into a blessing that facilitates OoD detection in FL.
no code implementations • 23 Oct 2022 • Junyuan Hong, Lingjuan Lyu, Jiayu Zhou, Michael Spranger
As deep learning blooms with growing demand for computation and data resources, outsourcing model training to a powerful cloud server becomes an attractive alternative to training on low-power, cost-effective end devices.
1 code implementation • 12 Oct 2022 • Haotao Wang, Junyuan Hong, Aston Zhang, Jiayu Zhou, Zhangyang Wang
As a result, both the stem and the classification head in the final network are hardly affected by backdoor training samples.
no code implementations • 4 Jul 2022 • Haotao Wang, Junyuan Hong, Jiayu Zhou, Zhangyang Wang
Increasing concerns have been raised on deep learning fairness in recent years.
1 code implementation • ICLR 2022 • Junyuan Hong, Haotao Wang, Zhangyang Wang, Jiayu Zhou
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
no code implementations • 29 Sep 2021 • Haotao Wang, Junyuan Hong, Jiayu Zhou, Zhangyang Wang
In this paper, we first propose a new fairness goal, termed Equalized Robustness (ER), to impose fair model robustness against unseen distribution shifts across majority and minority groups.
1 code implementation • the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining 2021 • Junyuan Hong, Zhuangdi Zhu, Shuyang Yu, Zhangyang Wang, Hiroko Dodge, Jiayu Zhou
While adversarial learning is commonly used in centralized learning for mitigating bias, there are significant barriers when extending it to the federated framework.
1 code implementation • 18 Jun 2021 • Junyuan Hong, Haotao Wang, Zhangyang Wang, Jiayu Zhou
In this paper, we study a novel FL strategy: propagating adversarial robustness from rich-resource users that can afford AT, to those with poor resources that cannot afford it, during federated learning.
4 code implementations • 20 May 2021 • Zhuangdi Zhu, Junyuan Hong, Jiayu Zhou
Federated Learning (FL) is a decentralized machine-learning paradigm, in which a global server iteratively averages the model parameters of local users without accessing their data.
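The iterative averaging described here is the FedAvg aggregation step: the server combines client parameter vectors weighted by local dataset size. A minimal sketch of that step (aggregation only; local training and the paper's data-free extension are omitted):

```python
def fedavg(client_params, client_sizes):
    """Weighted average of client parameter vectors (FedAvg aggregation).

    client_params: list of flat parameter lists, one per client.
    client_sizes: number of local training examples per client (the weights).
    """
    total = sum(client_sizes)
    dim = len(client_params[0])
    avg = [0.0] * dim
    for params, n in zip(client_params, client_sizes):
        w = n / total
        for j in range(dim):
            avg[j] += w * params[j]
    return avg
```

Clients with more data pull the global model further toward their local parameters, which is also why heterogeneous client data makes plain averaging suboptimal.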
no code implementations • 19 Jan 2021 • Junyuan Hong, Zhangyang Wang, Jiayu Zhou
In this paper, we provide comprehensive analysis of noise influence in dynamic privacy schedules to answer these critical questions.
no code implementations • 10 Feb 2018 • Junyuan Hong, Huanhuan Chen, Feng Lin
In this paper, we focus on subspace-based learning problems, where data elements are linear subspaces instead of vectors.