Search Results for author: Khoa D. Doan

Found 9 papers, 3 papers with code

Fooling the Textual Fooler via Randomizing Latent Representations

no code implementations • 2 Oct 2023 • Duy C. Hoang, Quang H. Nguyen, Saurav Manchanda, Minlong Peng, Kok-Seng Wong, Khoa D. Doan

Despite their outstanding performance on a variety of NLP tasks, NLP models have recently been shown to be vulnerable to adversarial attacks that slightly perturb the input to make the models misbehave.

Understanding the Robustness of Randomized Feature Defense Against Query-Based Adversarial Attacks

no code implementations • 1 Oct 2023 • Quang H. Nguyen, Yingjie Lao, Tung Pham, Kok-Seng Wong, Khoa D. Doan

Recent works have shown that deep neural networks are vulnerable to adversarial examples: samples close to the original image that nevertheless cause the model to misclassify.
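
The defense named in the title injects randomness into the network's intermediate features at inference time so that query-based attackers observe noisy outputs. Below is a minimal PyTorch sketch of that general idea, adding Gaussian noise to one hidden layer via a forward hook; the choice of layer (layer3 of a ResNet-18) and the noise scale sigma are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch: perturb an intermediate feature map with Gaussian noise at
# inference time so that each query returns slightly different outputs.
# The chosen layer and noise scale (sigma) are illustrative assumptions.
import torch
import torchvision.models as models

def randomized_feature_hook(sigma=0.05):
    def hook(module, inputs, output):
        # Returning a tensor from a forward hook replaces the layer's output.
        return output + sigma * torch.randn_like(output)
    return hook

model = models.resnet18(weights=None).eval()
# Attach the randomization to one intermediate block (an assumption).
model.layer3.register_forward_hook(randomized_feature_hook(sigma=0.05))

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)   # dummy query image
    logits = model(x)                 # each query sees a different perturbation
```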

Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack

no code implementations • 31 Aug 2023 • Sze Jue Yang, Quang Nguyen, Chee Seng Chan, Khoa D. Doan

Vulnerability to backdoor attacks has recently threatened the trustworthiness of machine learning models in practical applications.

Backdoor Attack • Image Compression
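
The attack in the title repurposes an off-the-shelf lossy codec as the trigger: poisoned training images are simply round-tripped through the codec and relabeled to the attacker's target class. The sketch below illustrates such a poisoning step with JPEG; the codec, quality factor, poison rate, and target label are assumptions for illustration, not the paper's exact recipe.

```python
# Minimal sketch of "lossy compression as a trigger": a fraction of training
# images is round-tripped through JPEG at low quality and relabeled to the
# attacker's target class. Quality, poison rate, and target label are
# illustrative assumptions.
import io
import random
from PIL import Image

def jpeg_trigger(img: Image.Image, quality: int = 10) -> Image.Image:
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)  # lossy round trip
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def poison_dataset(samples, target_label=0, poison_rate=0.05):
    """samples: list of (PIL.Image, int) pairs; returns a poisoned copy."""
    poisoned = []
    for img, label in samples:
        if random.random() < poison_rate:
            poisoned.append((jpeg_trigger(img), target_label))
        else:
            poisoned.append((img, label))
    return poisoned
```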

Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class

no code implementations • 17 Oct 2022 • Khoa D. Doan, Yingjie Lao, Ping Li

To achieve this goal, we propose to represent the trigger function as a class-conditional generative model and to inject the backdoor in a constrained optimization framework, where the trigger function learns to generate an optimal trigger pattern to attack any target class at will while simultaneously embedding this generative backdoor into the trained model.

Backdoor Attack
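
The excerpt above describes a class-conditional generative trigger: given any target label, the generator produces a perturbation that drives the backdoored model toward that label, and the two are trained jointly. The PyTorch sketch below renders that idea at a high level; the generator architecture, perturbation bound eps, and loss weighting lam are assumptions, not the paper's constrained-optimization formulation.

```python
# Minimal sketch of a class-conditional trigger generator jointly trained
# with the victim classifier. Architecture, perturbation bound (eps), and
# loss weighting (lam) are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalTriggerGenerator(nn.Module):
    def __init__(self, num_classes, img_channels=3, img_size=32, hidden=64):
        super().__init__()
        self.img_shape = (img_channels, img_size, img_size)
        self.embed = nn.Embedding(num_classes, hidden)
        self.net = nn.Sequential(
            nn.Linear(hidden, img_channels * img_size * img_size),
            nn.Tanh(),                            # bounded output in [-1, 1]
        )

    def forward(self, target_labels, eps=8 / 255):
        z = self.embed(target_labels)
        delta = self.net(z).view(-1, *self.img_shape)
        return eps * delta                        # small, bounded trigger pattern

def backdoor_step(classifier, generator, x, y, target, lam=1.0):
    """One joint optimization step: preserve clean accuracy while making
    x + trigger(target) be classified as `target` for any chosen class."""
    clean_loss = F.cross_entropy(classifier(x), y)
    x_poisoned = torch.clamp(x + generator(target), 0.0, 1.0)
    attack_loss = F.cross_entropy(classifier(x_poisoned), target)
    return clean_loss + lam * attack_loss
```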

CoopHash: Cooperative Learning of Multipurpose Descriptor and Contrastive Pair Generator via Variational MCMC Teaching for Supervised Image Hashing

no code implementations • 9 Oct 2022 • Khoa D. Doan, Jianwen Xie, Yaxuan Zhu, Yang Zhao, Ping Li

Leveraging supervised information can lead to superior retrieval performance in the image hashing domain, but performance degrades significantly without enough labeled data.

Retrieval

Defending Backdoor Attacks on Vision Transformer via Patch Processing

no code implementations • 24 Jun 2022 • Khoa D. Doan, Yingjie Lao, Peng Yang, Ping Li

We first examine the vulnerability of ViTs against various backdoor attacks and find that ViTs are also quite vulnerable to existing attacks.

Backdoor Attack • Inductive Bias
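
The title's defense relies on patch processing of ViT inputs; the excerpt does not spell out the exact procedure, so the snippet below only illustrates one generic patch-processing operation (randomly dropping image patches before inference). The patch size and drop ratio are assumptions, and how such processing is turned into a detection or mitigation rule is left to the paper.

```python
# Minimal sketch of a generic patch-processing operation for ViT-style inputs:
# randomly zero out a fraction of image patches before inference. The patch
# size and drop ratio are assumptions, not the paper's settings.
import torch

def random_patch_drop(images, patch_size=16, drop_ratio=0.3):
    """images: (B, C, H, W) tensor; zeroes out a random subset of patches."""
    b, c, h, w = images.shape
    gh, gw = h // patch_size, w // patch_size
    keep = (torch.rand(b, 1, gh, gw, device=images.device) > drop_ratio).float()
    # Upsample the patch-level mask back to pixel resolution.
    mask = keep.repeat_interleave(patch_size, dim=2).repeat_interleave(patch_size, dim=3)
    return images * mask

x = torch.randn(4, 3, 224, 224)
x_processed = random_patch_drop(x)   # feed to the ViT as usual
```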

One Loss for Quantization: Deep Hashing with Discrete Wasserstein Distributional Matching

1 code implementation • CVPR 2022 • Khoa D. Doan, Peng Yang, Ping Li

However, in existing deep supervised hashing methods, coding balance and low quantization error are difficult to achieve and involve several loss terms.

Deep Hashing • Quantization • +1
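
The "one loss" of the title matches the distribution of the network's continuous codes to a discrete target distribution over binary codes, so coding balance and low quantization error follow from a single objective. Below is a hedged per-dimension sketch of that idea using the closed-form 1D Wasserstein distance against an exactly balanced ±1 target; the paper's actual discrete Wasserstein formulation may differ.

```python
# Minimal sketch: a single distributional-matching loss for hashing. Each code
# dimension is pushed toward a perfectly balanced {-1, +1} distribution via the
# closed-form 1D Wasserstein distance (sort and compare). This illustrates the
# idea, not the paper's exact loss.
import torch

def balanced_binary_target(batch_size, device):
    # Half -1 and half +1 per dimension: balanced codes with zero quantization error.
    half = batch_size // 2
    return torch.cat([-torch.ones(half, device=device),
                      torch.ones(batch_size - half, device=device)])

def one_loss_quantization(codes):
    """codes: (B, K) continuous outputs in [-1, 1] (e.g., after tanh)."""
    b, k = codes.shape
    target = balanced_binary_target(b, codes.device)            # (B,)
    sorted_codes, _ = torch.sort(codes, dim=0)                  # per-dimension sort
    sorted_target = target.sort().values.unsqueeze(1).expand(b, k)
    # Squared 1D Wasserstein distance per dimension, averaged over dimensions.
    return ((sorted_codes - sorted_target) ** 2).mean()
```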

Image Generation Via Minimizing Fréchet Distance in Discriminator Feature Space

1 code implementation • 26 Mar 2020 • Khoa D. Doan, Saurav Manchanda, Fengjiao Wang, Sathiya Keerthi, Avradeep Bhowmik, Chandan K. Reddy

We use the intuition that it is much better to train the GAN generator by minimizing the distributional distance between real and generated images in a low-dimensional feature space representing such a manifold than in the original pixel space.

Image Generation
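
The Fréchet distance of the title is taken between Gaussian fits of real and generated features (as in FID) and has a closed form in the means and covariances of the two feature sets. The sketch below computes just that quantity on two feature batches; which discriminator layer supplies the features and how gradients reach the generator are the paper's details and are not reproduced here.

```python
# Minimal sketch: Frechet distance between Gaussian fits of two feature batches,
#   d^2 = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2}),
# applied to discriminator features of real vs. generated images. Feature
# extraction and the training loop are omitted.
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """feats_*: (N, D) arrays of features from some discriminator layer."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean, _ = linalg.sqrtm(s1 @ s2, disp=False)   # matrix square root
    covmean = covmean.real                           # drop numerical imaginary parts
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))
```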

Image Hashing by Minimizing Discrete Component-wise Wasserstein Distance

1 code implementation • 29 Feb 2020 • Khoa D. Doan, Saurav Manchanda, Sarkhan Badirli, Chandan K. Reddy

In this paper, we show that the high sample-complexity requirement often results in sub-optimal retrieval performance of adversarial hashing methods.

Image Retrieval • Quantization • +1
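
The component-wise Wasserstein distance of the title has a closed form in one dimension: sort both samples and compare order statistics, one code component at a time, which sidesteps training an adversarial discriminator and its sample-complexity cost. The sketch below shows that primitive against samples from a ±1 prior; the prior and the way this term enters the full hashing objective are assumptions here.

```python
# Minimal sketch: closed-form, component-wise (per-dimension) 1D Wasserstein
# distance between a batch of continuous hash codes and a batch sampled from a
# target binary prior. No discriminator is needed, unlike adversarial hashing.
# The Bernoulli(0.5) +/-1 prior is an illustrative assumption.
import torch

def componentwise_wasserstein(codes, prior_samples, p=2):
    """codes, prior_samples: (B, K) tensors; mean p-Wasserstein over K dims."""
    a, _ = torch.sort(codes, dim=0)           # order statistics per dimension
    b, _ = torch.sort(prior_samples, dim=0)
    return (a - b).abs().pow(p).mean()

# Example usage with a +/-1 Bernoulli target prior (assumption).
codes = torch.tanh(torch.randn(128, 64))                    # network outputs
prior = torch.randint(0, 2, (128, 64)).float() * 2 - 1      # samples in {-1, +1}
loss = componentwise_wasserstein(codes, prior)
```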
