Search Results for author: Khoa D. Doan

Found 15 papers, 6 papers with code

Flatness-aware Sequential Learning Generates Resilient Backdoors

1 code implementation • 20 Jul 2024 • Hoang Pham, The-Anh Ta, Anh Tran, Khoa D. Doan

Based on this finding, we re-formulate backdoor training through the lens of continual learning (CL) and propose a novel framework, named Sequential Backdoor Learning (SBL), that can generate resilient backdoors.

Continual Learning

Less is More: Sparse Watermarking in LLMs with Enhanced Text Quality

no code implementations • 17 Jul 2024 • Duy C. Hoang, Hung T. Q. Le, Rui Chu, Ping Li, Weijie Zhao, Yingjie Lao, Khoa D. Doan

To this end, watermarking has been adapted to LLMs, enabling a simple and effective way to detect and monitor generated text.

POS
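For background on the detection side of LLM watermarking in general (this is a generic hash-seeded green-list check, not the sparse, POS-guided scheme this paper proposes), a minimal sketch might look like the following; the green-list fraction and seeding function are illustrative assumptions.

```python
import hashlib
import math
import random

GAMMA = 0.5  # assumed fraction of the vocabulary marked "green" at each step


def green_list(prev_token: int, vocab_size: int) -> set:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % (2 ** 32)
    rng = random.Random(seed)
    return set(rng.sample(range(vocab_size), int(GAMMA * vocab_size)))


def detect(token_ids: list, vocab_size: int) -> float:
    """Return a z-score; large positive values suggest watermarked text."""
    hits = sum(
        1 for prev, cur in zip(token_ids, token_ids[1:])
        if cur in green_list(prev, vocab_size)
    )
    n = len(token_ids) - 1  # number of scored positions (assumes n > 0)
    return (hits - GAMMA * n) / math.sqrt(GAMMA * (1 - GAMMA) * n)
```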

Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks

no code implementations • 15 Jul 2024 • Quang H. Nguyen, Nguyen Ngoc-Hieu, The-Anh Ta, Thanh Nguyen-Tang, Kok-Seng Wong, Hoang Thanh-Tung, Khoa D. Doan

We study different strategies for selectively poisoning a small set of training samples in the target class to boost the attack success rate in this setting.

Adversarial Attack · Face Recognition

MetaLLM: A High-performant and Cost-efficient Dynamic Framework for Wrapping LLMs

no code implementations • 15 Jul 2024 • Quang H. Nguyen, Duy C. Hoang, Juliette Decugis, Saurav Manchanda, Nitesh V. Chawla, Khoa D. Doan

The rapid progress in machine learning (ML) has brought forth many large language models (LLMs) that excel in various tasks and areas.

Fooling the Textual Fooler via Randomizing Latent Representations

2 code implementations • 2 Oct 2023 • Duy C. Hoang, Quang H. Nguyen, Saurav Manchanda, Minlong Peng, Kok-Seng Wong, Khoa D. Doan

Despite their outstanding performance in a variety of NLP tasks, NLP models have recently been shown to be vulnerable to adversarial attacks that slightly perturb the input and cause the models to misbehave.

Understanding the Robustness of Randomized Feature Defense Against Query-Based Adversarial Attacks

1 code implementation • 1 Oct 2023 • Quang H. Nguyen, Yingjie Lao, Tung Pham, Kok-Seng Wong, Khoa D. Doan

Recent works have shown that deep neural networks are vulnerable to adversarial examples: inputs that stay close to the original image yet cause the model to misclassify.

Image Classification
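The defense analyzed above randomizes hidden features at inference time. A minimal sketch of that general idea, adding small Gaussian noise to one intermediate layer of a PyTorch classifier through a forward hook, is below; the choice of layer and the noise scale sigma are illustrative assumptions rather than the paper's tuned settings.

```python
import torch
import torchvision.models as models

sigma = 0.05  # assumed noise scale; trades off clean accuracy vs. robustness

model = models.resnet18(weights=None).eval()


def add_feature_noise(module, inputs, output):
    # Perturb the intermediate representation so repeated queries with the same
    # input yield slightly different logits, degrading the gradient/score
    # estimates used by query-based black-box attackers.
    return output + sigma * torch.randn_like(output)


# Hook an intermediate block (the layer choice here is an assumption).
model.layer3.register_forward_hook(add_feature_noise)

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)  # randomized at every call
```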

Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack

no code implementations • 31 Aug 2023 • Sze Jue Yang, Quang Nguyen, Chee Seng Chan, Khoa D. Doan

Vulnerability to backdoor attacks has recently threatened the trustworthiness of machine learning models in practical applications.

Backdoor Attack · Image Compression
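The attack above repurposes an off-the-shelf lossy codec as the trigger function. A minimal sketch of that idea, using aggressive JPEG re-encoding with Pillow as the trigger, is shown here; the quality level is an illustrative assumption.

```python
import io
from PIL import Image


def compression_trigger(image: Image.Image, quality: int = 10) -> Image.Image:
    """Apply heavy JPEG compression; the resulting artifacts act as a natural,
    human-plausible trigger pattern instead of a hand-crafted patch."""
    buf = io.BytesIO()
    image.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).copy()


# Poisoning step (sketch): mix a few triggered images, re-labeled with the
# target class, into the training set.
# poisoned = [(compression_trigger(img), target_class) for img, _ in subset]
```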

Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class

no code implementations • 17 Oct 2022 • Khoa D. Doan, Yingjie Lao, Ping Li

To achieve this goal, we propose to represent the trigger function as a class-conditional generative model and to inject the backdoor in a constrained optimization framework, where the trigger function learns to generate an optimal trigger pattern to attack any target class at will while simultaneously embedding this generative backdoor into the trained model.

Backdoor Attack
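A rough sketch of a class-conditional trigger generator in PyTorch is given below to illustrate the idea of producing a target-specific, bounded perturbation; the architecture, embedding size, and bound eps are illustrative assumptions, not the paper's actual network or training procedure.

```python
import torch
import torch.nn as nn


class ConditionalTriggerGenerator(nn.Module):
    """Map (image, target class) -> bounded trigger pattern added to the image."""

    def __init__(self, num_classes: int, channels: int = 3,
                 emb_dim: int = 16, eps: float = 8 / 255):
        super().__init__()
        self.eps = eps
        self.embed = nn.Embedding(num_classes, emb_dim)
        self.net = nn.Sequential(
            nn.Conv2d(channels + emb_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        # Broadcast the target-class embedding over the spatial grid.
        cond = self.embed(target).view(b, -1, 1, 1).expand(b, -1, h, w)
        trigger = self.eps * self.net(torch.cat([x, cond], dim=1))
        return (x + trigger).clamp(0, 1)


# g = ConditionalTriggerGenerator(num_classes=10)
# poisoned = g(images, torch.full((images.size(0),), desired_class))
```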

CoopHash: Cooperative Learning of Multipurpose Descriptor and Contrastive Pair Generator via Variational MCMC Teaching for Supervised Image Hashing

no code implementations • 9 Oct 2022 • Khoa D. Doan, Jianwen Xie, Yaxuan Zhu, Yang Zhao, Ping Li

Leveraging supervised information can lead to superior retrieval performance in the image hashing domain, but performance degrades significantly without enough labeled data.

Retrieval

Defending Backdoor Attacks on Vision Transformer via Patch Processing

no code implementations • 24 Jun 2022 • Khoa D. Doan, Yingjie Lao, Peng Yang, Ping Li

We first examine the vulnerability of ViTs against various backdoor attacks and find that ViTs are also quite vulnerable to existing attacks.

Backdoor Attack · Inductive Bias

One Loss for Quantization: Deep Hashing with Discrete Wasserstein Distributional Matching

1 code implementation • CVPR 2022 • Khoa D. Doan, Peng Yang, Ping Li

However, in existing deep supervised hashing methods, coding balance and low quantization error are difficult to achieve and involve several losses.

Deep Hashing · Quantization · +1

Under-confidence Backdoors Are Resilient and Stealthy Backdoors

no code implementations • 19 Feb 2022 • Minlong Peng, Zidi Xiong, Quang H. Nguyen, Mingming Sun, Khoa D. Doan, Ping Li

In order to achieve a high attack success rate using as few poisoned training samples as possible, most existing attack methods change the labels of the poisoned samples to the target class.

Backdoor Attack

Image Generation Via Minimizing Fréchet Distance in Discriminator Feature Space

1 code implementation • 26 Mar 2020 • Khoa D. Doan, Saurav Manchanda, Fengjiao Wang, Sathiya Keerthi, Avradeep Bhowmik, Chandan K. Reddy

We use the intuition that it is much better to train the GAN generator by minimizing the distributional distance between real and generated images in a low-dimensional feature space representing such a manifold than in the original pixel space.

Image Generation
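The distributional distance referred to above is the Fréchet distance between Gaussian fits of the real and generated feature sets; its standard closed form (the same one used for FID) can be computed as follows, assuming the inputs are discriminator features extracted from the two image batches.

```python
import numpy as np
from scipy import linalg


def frechet_distance(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """Squared Frechet distance between Gaussians fit to two feature sets:
    d^2 = ||mu_r - mu_f||^2 + Tr(S_r + S_f - 2 (S_r S_f)^(1/2))."""
    mu_r, mu_f = real_feats.mean(0), fake_feats.mean(0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    covmean = covmean.real  # drop tiny imaginary parts from numerical error
    return float(((mu_r - mu_f) ** 2).sum()
                 + np.trace(cov_r + cov_f - 2 * covmean))
```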

Image Hashing by Minimizing Discrete Component-wise Wasserstein Distance

1 code implementation • 29 Feb 2020 • Khoa D. Doan, Saurav Manchanda, Sarkhan Badirli, Chandan K. Reddy

In this paper, we show that the high sample-complexity requirement often results in sub-optimal retrieval performance of the adversarial hashing methods.

Image Retrieval · Quantization · +1
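As a rough illustration of component-wise Wasserstein matching (a generic 1-D formulation, not necessarily the paper's exact discrete variant), the distance between each hash component's empirical distribution and a target code distribution reduces to comparing sorted samples:

```python
import torch


def componentwise_wasserstein(codes: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Mean 1-D Wasserstein-1 distance per hash dimension.

    codes, target: (batch, dim) tensors; for two equal-size empirical
    distributions, the 1-D W1 distance in each dimension is the mean absolute
    difference of their sorted samples.
    """
    codes_sorted, _ = torch.sort(codes, dim=0)
    target_sorted, _ = torch.sort(target, dim=0)
    return (codes_sorted - target_sorted).abs().mean()


# Example: push continuous hash outputs toward a balanced +/-1 code distribution.
# target = torch.sign(torch.randn_like(hash_outputs))
# loss = componentwise_wasserstein(hash_outputs, target)
```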
