Search Results for author: Yuan Hong

Found 25 papers, 7 papers with code

Differentially Private Instance Encoding against Privacy Attacks

no code implementations · NAACL (ACL) 2022 · Shangyu Xie, Yuan Hong

TextHide was recently proposed to protect training data in the natural language domain via instance encoding.

Reconstruction Attack

Reconstruction Attack on Instance Encoding for Language Understanding

no code implementations · EMNLP 2021 · Shangyu Xie, Yuan Hong

A private learning scheme, TextHide, was recently proposed to protect private text data during the training phase via so-called instance encoding.

Privacy Preserving · Reconstruction Attack · +2

GALOT: Generative Active Learning via Optimizable Zero-shot Text-to-image Generation

no code implementations · 18 Dec 2024 · Hanbin Hong, Shenao Yan, Shuya Feng, Yan Yan, Yuan Hong

Active Learning (AL) is a core machine learning methodology that identifies and uses the most informative samples for efficient model training; a background sketch of this selection loop follows this entry.

Active Learning · Pseudo Label · +1
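
For context on the selection loop the GALOT abstract refers to, here is a minimal sketch of generic pool-based uncertainty sampling. It is background only, not the GALOT method itself (which synthesizes informative samples with a zero-shot text-to-image generator instead of selecting from a pool); the scikit-learn-style `predict_proba` interface and all names are illustrative assumptions.

```python
# Background sketch: pool-based active learning via predictive-entropy
# sampling. Illustrates the generic AL loop, NOT the GALOT method
# (which generates samples rather than selecting them). Names and the
# scikit-learn-style predict_proba interface are assumptions.
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Per-sample entropy of the class distribution; higher = more informative."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def select_batch(model, unlabeled_pool: np.ndarray, batch_size: int) -> np.ndarray:
    """Return indices of the `batch_size` most uncertain pool samples."""
    probs = model.predict_proba(unlabeled_pool)  # (n_samples, n_classes)
    scores = predictive_entropy(probs)
    return np.argsort(scores)[-batch_size:]      # highest-entropy samples
```

The selected indices would then be labeled by an oracle and moved into the training set before retraining, closing the loop.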

Learning Robust and Privacy-Preserving Representations via Information Theory

1 code implementation · 15 Dec 2024 · Binghui Zhang, Sayedeh Leila Noorbakhsh, Yun Dong, Yuan Hong, Binghui Wang

Machine learning models are vulnerable to both security attacks (e.g., adversarial examples) and privacy attacks (e.g., private attribute inference).

Adversarial Robustness · Attribute · +2

An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection

1 code implementation · 10 Jun 2024 · Shenao Yan, Shen Wang, Yue Duan, Hanbin Hong, Kiho Lee, Doowon Kim, Yuan Hong

Large Language Models (LLMs) have transformed code completion tasks, providing context-based suggestions to boost developer productivity in software engineering.

Backdoor Attack · Code Completion · +1

LMO-DP: Optimizing the Randomization Mechanism for Differentially Private Fine-Tuning (Large) Language Models

no code implementations · 29 May 2024 · Qin Yang, Meisam Mohammad, Han Wang, Ali Payani, Ashish Kundu, Kai Shu, Yan Yan, Yuan Hong

To address such limitations, we propose a novel Language Model-based Optimal Differential Privacy (LMO-DP) mechanism, which takes the first step to enable the tight composition of accurately fine-tuning (large) language models with a sub-optimal DP mechanism, even in strong privacy regimes (e.g., $0.1 \leq \epsilon < 3$); a baseline DP-SGD sketch follows this entry for context.

Language Modelling · SST-2 · +1
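
For background on the randomization being optimized, the sketch below shows one step of standard DP-SGD (per-example gradient clipping plus Gaussian noise), the common baseline that mechanisms such as LMO-DP aim to improve by replacing the Gaussian noise with an optimized distribution. This is not the LMO-DP mechanism itself; the function, tensor shapes, and default noise multiplier are illustrative assumptions.

```python
# Background sketch of one DP-SGD step: clip each per-example gradient
# to bound sensitivity, then add Gaussian noise to the sum. This is the
# standard Gaussian-mechanism baseline, NOT LMO-DP; names, shapes, and
# defaults are illustrative.
import torch

def dp_sgd_step(per_sample_grads: torch.Tensor,
                clip_norm: float = 1.0,
                noise_multiplier: float = 1.0) -> torch.Tensor:
    """per_sample_grads: (batch, dim) per-example gradients; returns a noisy mean."""
    norms = per_sample_grads.norm(dim=1, keepdim=True)       # (batch, 1) L2 norms
    clipped = per_sample_grads * (clip_norm / norms).clamp(max=1.0)
    noise = torch.randn(per_sample_grads.shape[1]) * noise_multiplier * clip_norm
    return (clipped.sum(dim=0) + noise) / per_sample_grads.shape[0]
```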

Certifying Adapters: Enabling and Enhancing the Certification of Classifier Adversarial Robustness

no code implementations · 25 May 2024 · Jieren Deng, Hanbin Hong, Aaron Palmer, Xin Zhou, Jinbo Bi, Kaleel Mahmood, Yuan Hong, Derek Aguiar

Randomized smoothing has become a leading method for achieving certified robustness in deep classifiers against $\ell_p$-norm adversarial perturbations; a minimal sketch of the smoothing procedure follows this entry.

Adversarial Robustness · Data Augmentation
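
As referenced in the abstract above, here is a minimal sketch of randomized-smoothing prediction in the style of Cohen et al.: the smoothed classifier returns whichever class the base classifier predicts most often under Gaussian input noise. This is generic background, not the paper's certifying-adapters construction; `base_classifier`, `sigma`, and `n_samples` are illustrative assumptions.

```python
# Minimal randomized-smoothing prediction: majority vote of the base
# classifier over Gaussian-perturbed copies of the input. Generic
# background (Cohen et al. style), not the certifying-adapters method.
import torch

def smoothed_predict(base_classifier, x: torch.Tensor,
                     sigma: float = 0.25, n_samples: int = 100) -> int:
    """Return the majority-vote class of base_classifier under N(0, sigma^2) input noise."""
    with torch.no_grad():
        preds = []
        for _ in range(n_samples):
            noisy = x + torch.randn_like(x) * sigma           # perturb the input
            preds.append(base_classifier(noisy.unsqueeze(0)).argmax(dim=1))
        votes = torch.cat(preds)                              # (n_samples,) class ids
    return int(torch.mode(votes).values)                      # most frequent class
```

A certified radius would additionally require a confidence bound on the top class's vote probability; this sketch covers prediction only.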

On the Faithfulness of Vision Transformer Explanations

no code implementations · CVPR 2024 · Junyi Wu, Weitai Kang, Hao Tang, Yuan Hong, Yan Yan

In contrast, our proposed SaCo offers a reliable faithfulness measurement, establishing a robust metric for interpretations.

Inf2Guard: An Information-Theoretic Framework for Learning Privacy-Preserving Representations against Inference Attacks

1 code implementation · 4 Mar 2024 · Sayedeh Leila Noorbakhsh, Binghui Zhang, Yuan Hong, Binghui Wang

Machine learning (ML) is vulnerable to inference attacks (e.g., membership inference, property inference, and data reconstruction) that aim to infer private information about the training data or dataset.

Inference Attack · Privacy Preserving · +1

Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks

no code implementations · 31 Jul 2023 · Xinyu Zhang, Hanbin Hong, Yuan Hong, Peng Huang, Binghui Wang, Zhongjie Ba, Kui Ren

Language models, especially basic text classification models, have been shown to be susceptible to textual adversarial attacks such as synonym substitution and word insertion attacks.

text-classification · Text Classification

Certifiable Black-Box Attacks with Randomized Adversarial Examples: Breaking Defenses with Provable Confidence

1 code implementation · 10 Apr 2023 · Hanbin Hong, Xinyu Zhang, Binghui Wang, Zhongjie Ba, Yuan Hong

Specifically, we establish a novel theoretical foundation for ensuring the attack success probability (ASP) of the black-box attack with randomized adversarial examples (AEs).

Benchmarking · speech-recognition · +1

OpBoost: A Vertical Federated Tree Boosting Framework Based on Order-Preserving Desensitization

1 code implementation · 4 Oct 2022 · Xiaochen Li, Yuke Hu, Weiran Liu, Hanwen Feng, Li Peng, Yuan Hong, Kui Ren, Zhan Qin

Although the solution based on Local Differential Privacy (LDP) addresses the above problems, it leads to low accuracy of the trained model; a minimal LDP baseline sketch follows this entry.

Privacy Preserving · Vertical Federated Learning
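
As a point of reference for the accuracy problem noted in the abstract above, the sketch below shows the plain LDP baseline: each party perturbs its own bounded numeric feature with Laplace noise before sharing. At small epsilon the noise overwhelms the value ordering that tree boosting relies on, which is the gap OpBoost's order-preserving desensitization targets. This is the generic baseline, not OpBoost itself; the function name and bounds are illustrative assumptions.

```python
# Baseline sketch: local Laplace mechanism for one bounded numeric
# feature. Each party perturbs its value before sharing it. This is the
# plain LDP baseline the abstract contrasts with, NOT OpBoost; names
# and bounds are illustrative.
import numpy as np

def ldp_laplace(value: float, lo: float, hi: float, epsilon: float) -> float:
    """Perturb one value in [lo, hi]; sensitivity is the range (hi - lo)."""
    sensitivity = hi - lo
    noisy = value + np.random.laplace(scale=sensitivity / epsilon)
    return float(np.clip(noisy, lo, hi))  # clamp back into the valid range
```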

UniCR: Universally Approximated Certified Robustness via Randomized Smoothing

no code implementations · 5 Jul 2022 · Hanbin Hong, Binghui Wang, Yuan Hong

We study certified robustness of machine learning classifiers against adversarial perturbations.

DPOAD: Differentially Private Outsourcing of Anomaly Detection through Iterative Sensitivity Learning

no code implementations · 27 Jun 2022 · Meisam Mohammady, Han Wang, Lingyu Wang, Mengyuan Zhang, Yosr Jarraya, Suryadipta Majumdar, Makan Pourzandi, Mourad Debbabi, Yuan Hong

Outsourcing anomaly detection to third parties can allow data owners to overcome resource constraints (e.g., in lightweight IoT devices), facilitate collaborative analysis (e.g., under distributed or multi-party scenarios), and benefit from lower costs and specialized expertise (e.g., of Managed Security Service Providers).

Anomaly Detection

Infrastructure-enabled GPS Spoofing Detection and Correction

no code implementations · 11 Feb 2022 · Feilong Wang, Yuan Hong, Jeff Ban

Accurate and robust localization is crucial for supporting high-level driving automation and safety.

Autonomous Driving

An Eye for an Eye: Defending against Gradient-based Attacks with Gradients

no code implementations · 2 Feb 2022 · Hanbin Hong, Yuan Hong, Yu Kong

In this paper, we show that the gradients can also be exploited as a powerful weapon to defend against adversarial attacks.

VideoDP: A Universal Platform for Video Analytics with Differential Privacy

no code implementations · 18 Sep 2019 · Han Wang, Shangyu Xie, Yuan Hong

In this paper, to the best of our knowledge, we propose the first differentially private video analytics platform (VideoDP), which flexibly supports different video analyses with rigorous privacy guarantees.
