Search Results for author: Hanbin Hong

Found 8 papers, 3 papers with code

An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection

1 code implementation • 10 Jun 2024 • Shenao Yan, Shen Wang, Yue Duan, Hanbin Hong, Kiho Lee, Doowon Kim, Yuan Hong

Large Language Models (LLMs) have transformed code completion tasks, providing context-based suggestions to boost developer productivity in software engineering.

Backdoor Attack • Code Completion • +1

Certifying Adapters: Enabling and Enhancing the Certification of Classifier Adversarial Robustness

no code implementations • 25 May 2024 • Jieren Deng, Hanbin Hong, Aaron Palmer, Xin Zhou, Jinbo Bi, Kaleel Mahmood, Yuan Hong, Derek Aguiar

Randomized smoothing has become a leading method for achieving certified robustness in deep classifiers against l_{p}-norm adversarial perturbations.

Adversarial Robustness • Data Augmentation
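For context on the randomized smoothing baseline the snippet above refers to, here is a minimal sketch of majority-vote prediction under Gaussian noise, assuming a generic PyTorch setup; `base_classifier`, `sigma`, and `num_samples` are illustrative placeholders, not details from this paper.

```python
# Minimal sketch of vanilla randomized smoothing (majority-vote prediction).
# `base_classifier`, `sigma`, and `num_samples` are illustrative placeholders.
import torch


def smoothed_predict(base_classifier, x, sigma=0.25, num_samples=100, num_classes=10):
    """Predict with the smoothed classifier g(x) = argmax_c P(f(x + N(0, sigma^2 I)) = c)."""
    counts = torch.zeros(num_classes, dtype=torch.long)
    with torch.no_grad():
        for _ in range(num_samples):
            noise = torch.randn_like(x) * sigma        # isotropic Gaussian noise
            logits = base_classifier(x + noise)        # query the base classifier
            counts[logits.argmax(dim=-1).item()] += 1  # tally the predicted class
    return counts.argmax().item()                      # majority vote
```

Certification would additionally lower-bound the top-class probability and convert it into an l_2 radius (e.g., sigma * Phi^{-1}(p_A) in Cohen et al.'s analysis).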

Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks

no code implementations • 31 Jul 2023 • Xinyu Zhang, Hanbin Hong, Yuan Hong, Peng Huang, Binghui Wang, Zhongjie Ba, Kui Ren

Language models, especially basic text classification models, have been shown to be susceptible to textual adversarial attacks such as synonym substitution and word insertion.

Text Classification
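As a rough illustration of the synonym-substitution attacks mentioned above (not the Text-CRS certification procedure itself), the sketch below greedily swaps words for WordNet synonyms to lower a classifier's confidence in the true label; `predict_proba` is a hypothetical black-box classifier returning class probabilities.

```python
# Naive greedy synonym-substitution attack sketch.
# Requires the WordNet corpus: nltk.download("wordnet").
# `predict_proba` is a hypothetical black-box text classifier.
from nltk.corpus import wordnet as wn


def synonyms(word):
    """Collect WordNet lemmas that could replace `word`."""
    cands = {lem.name().replace("_", " ") for s in wn.synsets(word) for lem in s.lemmas()}
    cands.discard(word)
    return cands


def synonym_substitution_attack(predict_proba, text, true_label):
    words = text.split()
    for i, word in enumerate(words):
        best_prob = predict_proba(" ".join(words))[true_label]
        for cand in synonyms(word):
            trial = words[:i] + [cand] + words[i + 1:]
            prob = predict_proba(" ".join(trial))[true_label]
            if prob < best_prob:               # keep the substitution that hurts most
                best_prob, words = prob, trial
    return " ".join(words)
```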

Certifiable Black-Box Attacks with Randomized Adversarial Examples: Breaking Defenses with Provable Confidence

1 code implementation • 10 Apr 2023 • Hanbin Hong, Xinyu Zhang, Binghui Wang, Zhongjie Ba, Yuan Hong

Specifically, we establish a novel theoretical foundation for ensuring the attack success probability (ASP) of the black-box attack with randomized adversarial examples (AEs).

Benchmarking • Speech Recognition • +1
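To ground the idea of a guaranteed attack success probability over randomized AEs, one simple (assumption-laden) illustration is an empirical lower confidence bound obtained by re-sampling the randomized perturbation; the sketch below uses a one-sided Clopper-Pearson bound and is not the paper's certification framework. `classifier` and `perturb` are hypothetical callables.

```python
# Sketch: empirical lower confidence bound on the attack success probability (ASP)
# of a randomized adversarial example, via a one-sided Clopper-Pearson bound.
# `classifier` returns a predicted label; `perturb` draws a fresh randomized AE.
from scipy.stats import beta


def asp_lower_bound(classifier, perturb, x, true_label, n=1000, alpha=0.001):
    """(1 - alpha) lower confidence bound on P[classifier(perturb(x)) != true_label]."""
    successes = sum(classifier(perturb(x)) != true_label for _ in range(n))
    if successes == 0:
        return 0.0
    return float(beta.ppf(alpha, successes, n - successes + 1))  # Clopper-Pearson lower bound
```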

UniCR: Universally Approximated Certified Robustness via Randomized Smoothing

no code implementations • 5 Jul 2022 • Hanbin Hong, Binghui Wang, Yuan Hong

We study certified robustness of machine learning classifiers against adversarial perturbations.

An Eye for an Eye: Defending against Gradient-based Attacks with Gradients

no code implementations • 2 Feb 2022 • Hanbin Hong, Yuan Hong, Yu Kong

In this paper, we show that the gradients can also be exploited as a powerful weapon to defend against adversarial attacks.
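For intuition on defensive uses of input gradients in general (a generic detection heuristic, not the defense proposed in this paper), the sketch below scores inputs by the norm of the loss gradient with respect to the input and flags those above a threshold; `model` and `threshold` are placeholders.

```python
# Generic illustration of a gradient-norm detection heuristic, not this paper's method.
import torch
import torch.nn.functional as F


def gradient_norm_score(model, x):
    """L2 norm of the loss gradient w.r.t. the input, using the model's own prediction."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    pred = logits.argmax(dim=-1)
    loss = F.cross_entropy(logits, pred)
    grad, = torch.autograd.grad(loss, x)
    return grad.flatten(start_dim=1).norm(dim=-1)  # one score per batch element


def looks_adversarial(model, x, threshold=1.0):
    return gradient_norm_score(model, x) > threshold
```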
