1 code implementation • 20 Jul 2024 • Shuya Feng, Meisam Mohammady, Hanbin Hong, Shenao Yan, Ashish Kundu, Binghui Wang, Yuan Hong
The proposed approach integrates with differentially private stochastic gradient descent (DP-SGD) to significantly boost accuracy and convergence.
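For context, a minimal sketch of a standard DP-SGD step (per-example gradient clipping plus Gaussian noise) on a toy logistic-regression model; the clipping bound C, noise multiplier sigma, and model are illustrative assumptions, not this paper's configuration.

```python
import numpy as np

# Hypothetical DP-SGD step for logistic regression: per-example gradient
# clipping (L2 bound C) followed by Gaussian noise (multiplier sigma).
def dp_sgd_step(w, X, y, lr=0.1, C=1.0, sigma=1.0, rng=np.random.default_rng(0)):
    grads = []
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + np.exp(-xi @ w))        # sigmoid prediction
        g = (p - yi) * xi                        # per-example gradient
        g = g / max(1.0, np.linalg.norm(g) / C)  # clip to L2 norm C
        grads.append(g)
    noise = rng.normal(0.0, sigma * C, size=w.shape)
    g_priv = (np.sum(grads, axis=0) + noise) / len(X)
    return w - lr * g_priv

# Toy usage on random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 5)); y = rng.integers(0, 2, size=32)
w = np.zeros(5)
for _ in range(10):
    w = dp_sgd_step(w, X, y)
```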
1 code implementation • 10 Jun 2024 • Shenao Yan, Shen Wang, Yue Duan, Hanbin Hong, Kiho Lee, Doowon Kim, Yuan Hong
Large Language Models (LLMs) have transformed code completion tasks, providing context-based suggestions to boost developer productivity in software engineering.
no code implementations • 25 May 2024 • Jieren Deng, Hanbin Hong, Aaron Palmer, Xin Zhou, Jinbo Bi, Kaleel Mahmood, Yuan Hong, Derek Aguiar
Randomized smoothing has become a leading method for achieving certified robustness in deep classifiers against ℓ_p-norm adversarial perturbations.
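As background, a minimal sketch of the standard randomized-smoothing prediction (majority vote over Gaussian-perturbed inputs); the base classifier, noise level sigma, and sample count below are assumptions for illustration, not this paper's construction.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000,
                     rng=np.random.default_rng(0)):
    """Majority vote of the base classifier over Gaussian-perturbed copies of x."""
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    votes = np.array([base_classifier(x + eps) for eps in noise])
    classes, counts = np.unique(votes, return_counts=True)
    return classes[np.argmax(counts)]

# Toy base classifier: sign of the first coordinate.
toy_classifier = lambda z: int(z[0] > 0)
print(smoothed_predict(toy_classifier, np.array([0.3, -1.2])))
```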
no code implementations • 31 Jul 2023 • Xinyu Zhang, Hanbin Hong, Yuan Hong, Peng Huang, Binghui Wang, Zhongjie Ba, Kui Ren
Language models, especially basic text classification models, have been shown to be susceptible to textual adversarial attacks such as synonym substitution and word insertion.
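A toy illustration of the kind of synonym-substitution attack mentioned here; the synonym table and victim classifier are hypothetical placeholders, not the attack studied in this paper.

```python
# Hypothetical greedy synonym-substitution attack on a toy sentiment classifier.
SYNONYMS = {"good": ["fine", "decent"], "great": ["nice", "solid"], "movie": ["film"]}

def toy_classifier(text):
    # Placeholder victim model: predicts positive (1) if a positive word appears.
    return 1 if any(w in {"good", "great"} for w in text.split()) else 0

def synonym_attack(text, target_label=0):
    words = text.split()
    for i, w in enumerate(words):
        for candidate in SYNONYMS.get(w, []):
            perturbed = " ".join(words[:i] + [candidate] + words[i + 1:])
            if toy_classifier(perturbed) == target_label:
                return perturbed            # adversarial example found
    return None

print(synonym_attack("a good movie"))       # e.g. "a fine movie"
```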
1 code implementation • 10 Apr 2023 • Hanbin Hong, Xinyu Zhang, Binghui Wang, Zhongjie Ba, Yuan Hong
Specifically, we establish a novel theoretical foundation for ensuring the attack success probability (ASP) of the black-box attack with randomized adversarial examples (AEs).
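A hedged sketch of how an attack success probability over randomized adversarial examples could be estimated by Monte Carlo sampling with a binomial (Clopper-Pearson) lower confidence bound; the Gaussian noise model and victim classifier are assumptions for illustration, not the paper's certification procedure.

```python
import numpy as np
from scipy.stats import beta

def asp_lower_bound(victim, x_adv, true_label, sigma=0.1, n=1000, alpha=0.001,
                    rng=np.random.default_rng(0)):
    """Clopper-Pearson lower bound on the probability that a randomized AE fools the victim."""
    noise = rng.normal(0.0, sigma, size=(n,) + x_adv.shape)
    successes = sum(victim(x_adv + eps) != true_label for eps in noise)
    if successes == 0:
        return 0.0
    return beta.ppf(alpha, successes, n - successes + 1)  # one-sided lower bound
```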
no code implementations • 12 Jul 2022 • Hanbin Hong, Yuan Hong
However, all of the existing methods rely on fixed i.i.d. noise distributions.
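To make the contrast concrete, a small sketch of fixed i.i.d. Gaussian smoothing noise versus a per-dimension (anisotropic) alternative; the specific per-dimension scales are an illustrative assumption only, not this paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.2, -1.5, 3.0])

# Fixed i.i.d. noise: one scalar sigma shared by every input dimension.
sigma = 0.5
iid_noise = rng.normal(0.0, sigma, size=x.shape)

# Anisotropic alternative: a different scale per dimension (illustrative only).
per_dim_sigma = np.array([0.1, 0.5, 1.0])
aniso_noise = rng.normal(0.0, per_dim_sigma, size=x.shape)
```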
no code implementations • 5 Jul 2022 • Hanbin Hong, Binghui Wang, Yuan Hong
We study certified robustness of machine learning classifiers against adversarial perturbations.
no code implementations • 2 Feb 2022 • Hanbin Hong, Yuan Hong, Yu Kong
In this paper, we show that the gradients can also be exploited as a powerful weapon to defend against adversarial attacks.
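As one familiar example of exploiting gradients defensively (not necessarily the defense proposed in this paper), a minimal FGSM-style adversarial-training step that uses the input gradient to craft training-time perturbations; the model, optimizer, and epsilon are placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical one-step FGSM adversarial-training update (illustration only).
def adv_train_step(model, x, y, optimizer, eps=0.03):
    loss_fn = nn.CrossEntropyLoss()
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]        # gradient w.r.t. the input
    x_adv = (x + eps * grad.sign()).detach()      # FGSM perturbation
    optimizer.zero_grad()
    adv_loss = loss_fn(model(x_adv), y)           # train on the adversarial copy
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```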