no code implementations • 2 Oct 2023 • Duy C. Hoang, Quang H. Nguyen, Saurav Manchanda, Minlong Peng, Kok-Seng Wong, Khoa D. Doan
Despite their outstanding performance on a variety of NLP tasks, NLP models have recently been shown to be vulnerable to adversarial attacks that slightly perturb the input to cause them to misbehave.
no code implementations • 1 Oct 2023 • Quang H. Nguyen, Yingjie Lao, Tung Pham, Kok-Seng Wong, Khoa D. Doan
Recent works have shown that deep neural networks are vulnerable to adversarial examples: inputs close to the original image that nonetheless cause the model to misclassify.
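The idea of a small perturbation that flips a prediction can be illustrated with the classic fast gradient sign method (FGSM) — a standard technique, not necessarily the one used in this paper. The logistic-regression "model", weights, and epsilon below are illustrative assumptions:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM against a logistic-regression stand-in model.

    Loss: binary cross-entropy; its gradient w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w. The adversarial example stays inside
    an L-infinity ball of radius eps around x.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # model probability for class 1
    grad_x = (p - y) * w                           # dLoss/dx
    return x + eps * np.sign(grad_x)               # ascend the loss within +/- eps

# Toy usage: a clean point correctly classified as class 1 (score > 0)
w = np.array([2.0, -1.0]); b = 0.0
x = np.array([1.0, 0.5])
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.6)
assert np.max(np.abs(x_adv - x)) <= 0.6 + 1e-9  # perturbation is bounded
assert np.dot(w, x_adv) + b < 0                 # yet the prediction flips
```

A perturbation of at most 0.6 per coordinate is enough to move this point across the decision boundary, which is the core phenomenon the paper's abstract describes.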
no code implementations • 31 Aug 2023 • Sze Jue Yang, Quang Nguyen, Chee Seng Chan, Khoa D. Doan
Vulnerability to backdoor attacks has recently threatened the trustworthiness of machine learning models in practical applications.
no code implementations • 17 Oct 2022 • Khoa D. Doan, Yingjie Lao, Ping Li
To achieve this goal, we propose to represent the trigger function as a class-conditional generative model and to inject the backdoor in a constrained optimization framework: the trigger function learns to generate an optimal trigger pattern that can attack any target class at will, while this generative backdoor is simultaneously embedded into the trained model.
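A highly simplified sketch of a class-conditional trigger under a perturbation constraint: here a plain per-class pattern table stands in for the generative model, and the projection onto an L-infinity ball plays the role of the constraint in the optimization. All names and values are hypothetical, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, d, eps = 3, 8, 0.1

# Hypothetical stand-in for the class-conditional trigger generator:
# one (learnable) pattern per target class. In the real method this
# would be a generative network conditioned on the target class.
patterns = rng.normal(size=(num_classes, d))

def apply_trigger(x, target_class):
    t = patterns[target_class]
    t = np.clip(t, -eps, eps)  # enforce the imperceptibility constraint
    return x + t               # poisoned input aimed at the chosen target

x = rng.normal(size=d)
for c in range(num_classes):
    x_poisoned = apply_trigger(x, c)
    # each target class yields its own small, bounded trigger
    assert np.max(np.abs(x_poisoned - x)) <= eps + 1e-12
```

The key property sketched here is that a single trigger mechanism, conditioned on the target class, can produce a distinct constrained pattern for any class chosen at attack time.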
no code implementations • 9 Oct 2022 • Khoa D. Doan, Jianwen Xie, Yaxuan Zhu, Yang Zhao, Ping Li
Leveraging supervised information can lead to superior retrieval performance in the image hashing domain, but performance degrades significantly without enough labeled data.
no code implementations • 24 Jun 2022 • Khoa D. Doan, Yingjie Lao, Peng Yang, Ping Li
We first examine the vulnerability of ViTs against various backdoor attacks and find that ViTs are also quite vulnerable to existing attacks.
1 code implementation • CVPR 2022 • Khoa D. Doan, Peng Yang, Ping Li
However, in existing deep supervised hashing methods, coding balance and low quantization error are difficult to achieve and typically require several loss terms.
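Two of the loss terms commonly used for these objectives in deep supervised hashing — a quantization penalty pushing continuous codes toward ±1 and a bit-balance penalty — can be sketched as follows. These are standard formulations, assumed for illustration rather than taken from the paper:

```python
import numpy as np

def quantization_loss(h):
    """Mean squared distance of continuous codes h (in [-1, 1]) to their
    binarized values sign(h); zero when every code is already +/-1."""
    return np.mean((h - np.sign(h)) ** 2)

def balance_loss(h):
    """Mean over bits of the squared average activation; zero when each
    bit is +1 on half the samples and -1 on the other half."""
    return np.mean(np.mean(h, axis=0) ** 2)

# Relaxed (continuous) codes for 4 samples x 2 bits incur positive loss
h = np.array([[0.9, -0.8], [0.7, 0.6], [-0.95, -0.5], [-0.6, 0.8]])
assert quantization_loss(h) > 0 and balance_loss(h) > 0

# Perfectly binarized, perfectly balanced codes incur zero loss on both
hb = np.array([[1., -1.], [1., 1.], [-1., -1.], [-1., 1.]])
assert quantization_loss(hb) == 0.0 and balance_loss(hb) == 0.0
```

Because each property needs its own term, the overall objective becomes a weighted sum of losses whose trade-offs must be tuned, which is the difficulty the abstract alludes to.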
1 code implementation • 26 Mar 2020 • Khoa D. Doan, Saurav Manchanda, Fengjiao Wang, Sathiya Keerthi, Avradeep Bhowmik, Chandan K. Reddy
We use the intuition that it is much better to train the GAN generator by minimizing the distributional distance between real and generated images in a low-dimensional feature space representing such a manifold than in the original pixel space.
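One concrete distributional distance that can be minimized in such a feature space is the (squared) maximum mean discrepancy (MMD). The RBF kernel, bandwidth, and toy "features" below are illustrative assumptions standing in for an encoder's output, not the paper's exact objective:

```python
import numpy as np

def mmd_sq(x_feat, y_feat, gamma=1.0):
    """Biased estimator of squared MMD with an RBF kernel: a kernel-based
    distance between the distributions of two feature sets."""
    def k(a, b):
        d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
        return np.exp(-gamma * d2)
    return k(x_feat, x_feat).mean() + k(y_feat, y_feat).mean() - 2 * k(x_feat, y_feat).mean()

rng = np.random.default_rng(1)
real      = rng.normal(0.0, 1.0, size=(200, 4))  # features of "real" images
fake_far  = rng.normal(3.0, 1.0, size=(200, 4))  # badly matched generator
fake_near = rng.normal(0.0, 1.0, size=(200, 4))  # well matched generator

# A generator minimizing this distance is pulled toward the real
# distribution: mismatched features give a strictly larger value.
assert mmd_sq(real, fake_far) > mmd_sq(real, fake_near)
```

Computing such a distance over a few feature dimensions rather than raw pixels is exactly the trade the abstract argues for: the comparison happens on the manifold the features represent.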
1 code implementation • 29 Feb 2020 • Khoa D. Doan, Saurav Manchanda, Sarkhan Badirli, Chandan K. Reddy
In this paper, we show that this high sample-complexity requirement often results in sub-optimal retrieval performance of adversarial hashing methods.