no code implementations • NAACL (ACL) 2022 • Shangyu Xie, Yuan Hong
TextHide was recently proposed to protect the training data via instance encoding in the natural language domain.
no code implementations • EMNLP 2021 • Shangyu Xie, Yuan Hong
A private learning scheme, TextHide, was recently proposed to protect private text data during the training phase via so-called instance encoding.
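For a rough sense of what instance encoding in this style looks like, here is a minimal sketch in the spirit of TextHide/InstaHide (mix each representation with a few others and apply a random sign mask). This is not the authors' exact scheme; the sentence-embedding input, mixing width `k`, and all names are illustrative assumptions.

```python
import numpy as np

def texthide_encode(embeddings, k=4, rng=np.random.default_rng(0)):
    """Mix each embedding with k-1 others and apply a per-coordinate
    random sign mask (an InstaHide/TextHide-style encoding sketch)."""
    n, d = embeddings.shape
    encoded = np.empty_like(embeddings)
    for i in range(n):
        others = rng.choice(np.delete(np.arange(n), i), size=k - 1, replace=False)
        idx = np.concatenate(([i], others))
        lam = rng.dirichlet(np.ones(k))         # random mixing weights, sum to 1
        mixed = lam @ embeddings[idx]           # convex combination of k embeddings
        sign = rng.choice([-1.0, 1.0], size=d)  # random sign mask hides the mixture
        encoded[i] = sign * mixed
    return encoded

# toy usage: 8 "sentence embeddings" of dimension 16
emb = np.random.default_rng(1).normal(size=(8, 16))
print(texthide_encode(emb).shape)  # (8, 16)
```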
no code implementations • 18 Dec 2024 • Hanbin Hong, Shenao Yan, Shuya Feng, Yan Yan, Yuan Hong
Active Learning (AL) is a core machine learning methodology that identifies and exploits the most informative samples for efficient model training.
1 code implementation • 15 Dec 2024 • Binghui Zhang, Sayedeh Leila Noorbakhsh, Yun Dong, Yuan Hong, Binghui Wang
Machine learning models are vulnerable to both security attacks (e.g., adversarial examples) and privacy attacks (e.g., private attribute inference).
no code implementations • 22 Aug 2024 • Zifan Wang, Binghui Zhang, Meng Pang, Yuan Hong, Binghui Wang
Federated learning (FL) is an emerging collaborative learning paradigm that aims to protect data privacy.
1 code implementation • 20 Jul 2024 • Shuya Feng, Meisam Mohammady, Hanbin Hong, Shenao Yan, Ashish Kundu, Binghui Wang, Yuan Hong
This work improves differentially private learning (e.g., DP-SGD) to significantly boost accuracy and convergence.
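For context, a minimal sketch of the DP-SGD step the snippet references: clip each per-example gradient, sum, and add Gaussian noise scaled to the clip bound. All names and constants here are illustrative, not the paper's method.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip=1.0,
                noise_multiplier=1.0, rng=np.random.default_rng(0)):
    """One DP-SGD update: clip each per-example gradient to L2 norm
    `clip`, sum, add Gaussian noise N(0, (noise_multiplier*clip)^2),
    then average over the batch."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip)
    noise = rng.normal(0.0, noise_multiplier * clip, size=params.shape)
    noisy_grad = (clipped.sum(axis=0) + noise) / len(per_example_grads)
    return params - lr * noisy_grad

# toy usage: 32 per-example gradients for a 10-dimensional parameter vector
rng = np.random.default_rng(1)
print(dp_sgd_step(np.zeros(10), rng.normal(size=(32, 10))))
```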
1 code implementation • 10 Jun 2024 • Shenao Yan, Shen Wang, Yue Duan, Hanbin Hong, Kiho Lee, Doowon Kim, Yuan Hong
Large Language Models (LLMs) have transformed code completion tasks, providing context-based suggestions to boost developer productivity in software engineering.
no code implementations • 29 May 2024 • Qin Yang, Meisam Mohammady, Han Wang, Ali Payani, Ashish Kundu, Kai Shu, Yan Yan, Yuan Hong
To address such limitations, we propose a novel Language Model-based Optimal Differential Privacy (LMO-DP) mechanism, which takes the first step to enable the tight composition of accurately fine-tuning (large) language models with a sub-optimal DP mechanism, even in strong privacy regimes (e.g., $0.1 \leq \epsilon < 3$).
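To illustrate why such strong privacy regimes are hard, here is the textbook analytic Gaussian-mechanism calibration (valid for $\epsilon < 1$), which shows noise blowing up as $\epsilon$ shrinks. This is a generic illustration, not the LMO-DP mechanism itself.

```python
import numpy as np

def gaussian_sigma(epsilon, delta, sensitivity=1.0):
    """Analytic Gaussian-mechanism calibration (valid for epsilon < 1):
    sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

# the required noise scale grows quickly in strong privacy regimes
for eps in (0.1, 0.3, 0.9):
    print(f"epsilon={eps}: sigma={gaussian_sigma(eps, delta=1e-5):.1f}")
```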
no code implementations • 25 May 2024 • Jieren Deng, Hanbin Hong, Aaron Palmer, Xin Zhou, Jinbo Bi, Kaleel Mahmood, Yuan Hong, Derek Aguiar
Randomized smoothing has become a leading method for achieving certified robustness in deep classifiers against $\ell_p$-norm adversarial perturbations.
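A minimal Monte Carlo sketch of randomized smoothing in the style of Cohen et al.: vote over Gaussian-perturbed copies of the input and derive a certified $\ell_2$ radius $R = \sigma \, \Phi^{-1}(p_A)$. The `base_classifier` callable is an assumption, and a rigorous certificate would replace the point estimate of $p_A$ with a confidence lower bound.

```python
import numpy as np
from scipy.stats import norm

def smoothed_predict(base_classifier, x, sigma=0.25, n=1000,
                     rng=np.random.default_rng(0)):
    """Estimate the smoothed classifier g(x) = argmax_c
    P(f(x + N(0, sigma^2 I)) = c) by Monte Carlo, and report the
    certified L2 radius R = sigma * Phi^{-1}(p_A)."""
    counts = {}
    for _ in range(n):
        label = base_classifier(x + rng.normal(0.0, sigma, size=x.shape))
        counts[label] = counts.get(label, 0) + 1
    top = max(counts, key=counts.get)
    p_a = min(counts[top] / n, 1.0 - 1.0 / n)  # cap to avoid an infinite radius
    radius = sigma * norm.ppf(p_a) if p_a > 0.5 else 0.0
    return top, radius

# toy usage with a trivial linear "classifier"
clf = lambda z: int(z.sum() > 0)
print(smoothed_predict(clf, np.ones(5)))
```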
no code implementations • 14 May 2024 • Jie Fu, Yuan Hong, XinPeng Ling, Leixia Wang, Xun Ran, Zhiyu Sun, Wendy Hui Wang, Zhili Chen, Yang Cao
Our work presents a systematic overview of differentially private federated learning.
no code implementations • CVPR 2024 • Junyi Wu, Weitai Kang, Hao Tang, Yuan Hong, Yan Yan
In contrast, our proposed SaCo offers a reliable faithfulness measurement, establishing a robust metric for interpretations.
1 code implementation • 4 Mar 2024 • Sayedeh Leila Noorbakhsh, Binghui Zhang, Yuan Hong, Binghui Wang
Machine learning (ML) is vulnerable to inference attacks (e.g., membership inference, property inference, and data reconstruction) that aim to infer private information about the training data or dataset.
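For a concrete sense of the attack class being defended against, here is the canonical loss-threshold membership inference baseline (a standard attack from the literature, not this paper's defense; the synthetic losses and threshold are illustrative).

```python
import numpy as np

def loss_threshold_mia(losses, threshold=0.5):
    """Flag an example as a training member when its loss falls below
    the threshold (members tend to have lower loss than non-members)."""
    return losses < threshold

# toy usage: synthetic member (low) vs. non-member (high) losses
rng = np.random.default_rng(0)
member_losses = rng.exponential(0.2, size=1000)
nonmember_losses = rng.exponential(1.0, size=1000)
print("TPR:", loss_threshold_mia(member_losses).mean())
print("FPR:", loss_threshold_mia(nonmember_losses).mean())
```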
1 code implementation • 20 Oct 2023 • Xinyu Zhang, Qingyu Liu, Zhongjie Ba, Yuan Hong, Tianhang Zheng, Feng Lin, Li Lu, Kui Ren
In this paper, we first conduct a comprehensive study on prior FL attacks and detection methods.
no code implementations • 31 Jul 2023 • Xinyu Zhang, Hanbin Hong, Yuan Hong, Peng Huang, Binghui Wang, Zhongjie Ba, Kui Ren
Language models, especially basic text classification models, have been shown to be susceptible to textual adversarial attacks such as synonym substitution and word insertion attacks.
1 code implementation • 10 Apr 2023 • Hanbin Hong, Xinyu Zhang, Binghui Wang, Zhongjie Ba, Yuan Hong
Specifically, we establish a novel theoretical foundation for ensuring the attack success probability (ASP) of black-box attacks with randomized adversarial examples (AEs).
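A rough sketch of what estimating the ASP of a randomized adversarial example could look like: sample perturbations around the AE and count misclassifications. The `model` callable, noise scale, and point-estimate approach are assumptions; the paper's theory would lower-bound this probability with provable confidence rather than estimate it.

```python
import numpy as np

def estimate_asp(model, x_adv, true_label, sigma=0.1, n=1000,
                 rng=np.random.default_rng(0)):
    """Monte Carlo point estimate of the attack success probability:
    the fraction of Gaussian perturbations of x_adv the model
    misclassifies."""
    hits = sum(
        model(x_adv + rng.normal(0.0, sigma, size=x_adv.shape)) != true_label
        for _ in range(n)
    )
    return hits / n

# toy usage with a trivial linear "model"
model = lambda z: int(z.sum() > 0)
print(estimate_asp(model, x_adv=-0.05 * np.ones(5), true_label=1))
```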
1 code implementation • 4 Oct 2022 • Xiaochen Li, Yuke Hu, Weiran Liu, Hanwen Feng, Li Peng, Yuan Hong, Kui Ren, Zhan Qin
Although the solution based on Local Differential Privacy (LDP) addresses the above problems, it leads to low accuracy of the trained model.
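For reference, the standard k-ary randomized response primitive underlying many LDP protocols; the heavy per-user noise it injects is one source of the accuracy loss noted above. This is a generic sketch, not this paper's protocol.

```python
import numpy as np

def randomized_response(value, k, epsilon, rng=np.random.default_rng(0)):
    """k-ary randomized response: report the true value in {0, ..., k-1}
    with probability e^eps / (e^eps + k - 1), otherwise a uniformly
    random *other* value; this satisfies epsilon-LDP."""
    p_true = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < p_true:
        return value
    other = int(rng.integers(k - 1))
    return other if other < value else other + 1  # skip over the true value

# toy usage: each user privatizes one categorical value
print([randomized_response(2, k=4, epsilon=1.0) for _ in range(5)])
```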
no code implementations • 18 Jul 2022 • Canyu Chen, Yueqing Liang, Xiongxiao Xu, Shangyu Xie, Ashish Kundu, Ali Payani, Yuan Hong, Kai Shu
Thus, it is essential to ensure fairness in machine learning models.
no code implementations • 12 Jul 2022 • Hanbin Hong, Yuan Hong
However, all of the existing methods rely on fixed i.i.d. noise distributions.
no code implementations • 5 Jul 2022 • Hanbin Hong, Binghui Wang, Yuan Hong
We study certified robustness of machine learning classifiers against adversarial perturbations.
no code implementations • 27 Jun 2022 • Meisam Mohammady, Han Wang, Lingyu Wang, Mengyuan Zhang, Yosr Jarraya, Suryadipta Majumdar, Makan Pourzandi, Mourad Debbabi, Yuan Hong
Outsourcing anomaly detection to third-parties can allow data owners to overcome resource constraints (e.g., in lightweight IoT devices), facilitate collaborative analysis (e.g., under distributed or multi-party scenarios), and benefit from lower costs and specialized expertise (e.g., of Managed Security Service Providers).
no code implementations • 11 Feb 2022 • Feilong Wang, Yuan Hong, Jeff Ban
Accurate and robust localization is crucial for supporting high-level driving automation and safety.
no code implementations • 2 Feb 2022 • Hanbin Hong, Yuan Hong, Yu Kong
In this paper, we show that gradients can also be exploited as a powerful weapon to defend against adversarial attacks.
no code implementations • 31 May 2021 • Zhikun Liu, Yuanpeng Liu, Yuan Hong, Jinwen Meng, Jianguo Wang, Shusen Zheng, Xiao Xu
The LT set contained patients with hepatocellular carcinoma (HCC) treated by liver transplantation (LT).
no code implementations • 18 Sep 2019 • Han Wang, Shangyu Xie, Yuan Hong
In this paper, to the best of our knowledge, we propose the first differentially private video analytics platform (VideoDP), which flexibly supports different video analyses with a rigorous privacy guarantee.
no code implementations • 19 Feb 2019 • Chen Change Loy, Dahua Lin, Wanli Ouyang, Yuanjun Xiong, Shuo Yang, Qingqiu Huang, Dongzhan Zhou, Wei Xia, Quanquan Li, Ping Luo, Junjie Yan, Jian-Feng Wang, Zuoxin Li, Ye Yuan, Boxun Li, Shuai Shao, Gang Yu, Fangyun Wei, Xiang Ming, Dong Chen, Shifeng Zhang, Cheng Chi, Zhen Lei, Stan Z. Li, Hongkai Zhang, Bingpeng Ma, Hong Chang, Shiguang Shan, Xilin Chen, Wu Liu, Boyan Zhou, Huaxiong Li, Peng Cheng, Tao Mei, Artem Kukharenko, Artem Vasenin, Nikolay Sergievskiy, Hua Yang, Liangqi Li, Qiling Xu, Yuan Hong, Lin Chen, Mingjun Sun, Yirong Mao, Shiying Luo, Yongjun Li, Ruiping Wang, Qiaokang Xie, Ziyang Wu, Lei Lu, Yiheng Liu, Wengang Zhou
This paper presents a review of the 2018 WIDER Challenge on Face and Pedestrian.