1 code implementation • 20 Jul 2024 • Hoang Pham, The-Anh Ta, Anh Tran, Khoa D. Doan
Based on this finding, we re-formulate backdoor training through the lens of continual learning (CL) and propose a novel framework, named Sequential Backdoor Learning (SBL), that can generate resilient backdoors.
no code implementations • 17 Jul 2024 • Duy C. Hoang, Hung T. Q. Le, Rui Chu, Ping Li, Weijie Zhao, Yingjie Lao, Khoa D. Doan
To this end, watermarking has been adapted to LLMs, enabling a simple and effective way to detect and monitor generated text.
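For background, the snippet below is a minimal sketch of one common watermark-detection idea (a pseudo-random "green list" test with a z-score, as in soft watermarking schemes). The hash-based seeding and the threshold convention here are illustrative assumptions, not this paper's method.

```python
# Illustrative green-list watermark detection sketch (assumed scheme, not this paper's).
import hashlib
import math

def is_green(prev_token: int, token: int, gamma: float = 0.5) -> bool:
    # Pseudo-randomly assign `token` to the green list, seeded by the previous token.
    h = int(hashlib.sha256(f"{prev_token}:{token}".encode()).hexdigest(), 16)
    return (h % 1000) / 1000.0 < gamma

def watermark_z_score(token_ids, gamma: float = 0.5) -> float:
    """Large z-scores suggest the text was generated with the watermark enabled."""
    hits = sum(is_green(p, t, gamma) for p, t in zip(token_ids, token_ids[1:]))
    n = len(token_ids) - 1
    return (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))
```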
no code implementations • 15 Jul 2024 • Quang H. Nguyen, Nguyen Ngoc-Hieu, The-Anh Ta, Thanh Nguyen-Tang, Kok-Seng Wong, Hoang Thanh-Tung, Khoa D. Doan
We study different strategies for selectively poisoning a small set of training samples in the target class to boost the attack success rate in this setting.
no code implementations • 15 Jul 2024 • Quang H. Nguyen, Duy C. Hoang, Juliette Decugis, Saurav Manchanda, Nitesh V. Chawla, Khoa D. Doan
The rapid progress in machine learning (ML) has brought forth many large language models (LLMs) that excel in various tasks and areas.
no code implementations • 25 Jun 2024 • Nghia D. Nguyen, Hieu Trung Nguyen, Ang Li, Hoang Pham, Viet Anh Nguyen, Khoa D. Doan
The intrinsic capability to continuously learn from a changing data stream is a desideratum of deep neural networks (DNNs).
2 code implementations • 2 Oct 2023 • Duy C. Hoang, Quang H. Nguyen, Saurav Manchanda, Minlong Peng, Kok-Seng Wong, Khoa D. Doan
Despite the outstanding performance of NLP models on a variety of tasks, recent studies have revealed that they are vulnerable to adversarial attacks that slightly perturb the input to cause the models to misbehave.
1 code implementation • 1 Oct 2023 • Quang H. Nguyen, Yingjie Lao, Tung Pham, Kok-Seng Wong, Khoa D. Doan
Recent works have shown that deep neural networks are vulnerable to adversarial examples: inputs that remain close to the original image yet cause the model to misclassify.
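For illustration, the following is a minimal sketch of one classic way such adversarial examples are crafted (FGSM); it is background only, not the method studied in this paper, and `model`, `image`, and `label` are assumed placeholders.

```python
# Minimal FGSM sketch (illustrative background, not this paper's method).
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, eps=8 / 255):
    """Craft an adversarial example within an L-infinity ball of radius eps."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip to the valid image range.
    adv = image + eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```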
no code implementations • 31 Aug 2023 • Sze Jue Yang, Quang Nguyen, Chee Seng Chan, Khoa D. Doan
Vulnerability to backdoor attacks has recently threatened the trustworthiness of machine learning models in practical applications.
no code implementations • 17 Oct 2022 • Khoa D. Doan, Yingjie Lao, Ping Li
To achieve this goal, we propose to represent the trigger function as a class-conditional generative model and to inject the backdoor in a constrained optimization framework, where the trigger function learns to generate an optimal trigger pattern to attack any target class at will while simultaneously embedding this generative backdoor into the trained model.
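As a rough illustration of what such a class-conditional trigger generator could look like, here is a minimal sketch assuming a simple convolutional architecture and additive, bounded blending; the layer sizes and blending scheme are assumptions, not the authors' exact design, and the constrained-optimization objective is omitted.

```python
# Hypothetical class-conditional trigger generator sketch (assumed architecture).
import torch
import torch.nn as nn

class ConditionalTriggerGenerator(nn.Module):
    def __init__(self, num_classes, image_channels=3, embed_dim=64):
        super().__init__()
        self.class_embed = nn.Embedding(num_classes, embed_dim)
        self.net = nn.Sequential(
            nn.Conv2d(image_channels + embed_dim, 32, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, image_channels, 3, padding=1),
            nn.Tanh(),  # bounded trigger pattern in [-1, 1]
        )

    def forward(self, images, target_classes, eps=0.05):
        b, _, h, w = images.shape
        # Broadcast the target-class embedding over the spatial dimensions.
        cond = self.class_embed(target_classes)[:, :, None, None].expand(-1, -1, h, w)
        trigger = self.net(torch.cat([images, cond], dim=1))
        # Add a small, class-conditional trigger to the clean image.
        return (images + eps * trigger).clamp(0.0, 1.0)
```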
no code implementations • 9 Oct 2022 • Khoa D. Doan, Jianwen Xie, Yaxuan Zhu, Yang Zhao, Ping Li
Leveraging supervised information can lead to superior retrieval performance in the image hashing domain, but the performance degrades significantly without enough labeled data.
no code implementations • 24 Jun 2022 • Khoa D. Doan, Yingjie Lao, Peng Yang, Ping Li
We first examine the vulnerability of ViTs against various backdoor attacks and find that ViTs are also quite vulnerable to existing attacks.
1 code implementation • CVPR 2022 • Khoa D. Doan, Peng Yang, Ping Li
However, in existing deep supervised hashing methods, coding balance and low quantization error are difficult to achieve and involve several losses.
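For context, below is a minimal sketch of two auxiliary losses commonly used to pursue these goals; the formulations are standard assumptions for illustration, not this paper's proposed objective.

```python
# Common auxiliary hashing losses (illustrative, assumed formulations).
import torch

def quantization_loss(codes):
    # codes: real-valued network outputs in [-1, 1], shape (batch, num_bits).
    # Push each output toward the binary values {-1, +1}.
    return (codes.abs() - 1.0).pow(2).mean()

def bit_balance_loss(codes):
    # Push each bit's batch mean toward zero so both code values are used equally.
    return codes.mean(dim=0).pow(2).mean()
```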
no code implementations • 19 Feb 2022 • Minlong Peng, Zidi Xiong, Quang H. Nguyen, Mingming Sun, Khoa D. Doan, Ping Li
In order to achieve a high attack success rate using as few poisoned training samples as possible, most existing attack methods change the labels of the poisoned samples to the target class.
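A minimal sketch of this conventional dirty-label poisoning step is shown below; `apply_trigger` is a hypothetical placeholder for any trigger-insertion function, and the sampling strategy is an assumption for illustration.

```python
# Illustrative dirty-label poisoning sketch: a small subset of samples receives the
# trigger and is relabeled to the attacker's target class.
import random

def poison_dataset(dataset, target_class, poison_rate, apply_trigger):
    """dataset: list of (input, label) pairs; returns a poisoned copy."""
    poisoned = list(dataset)
    num_poison = int(len(dataset) * poison_rate)
    for idx in random.sample(range(len(dataset)), num_poison):
        x, _ = poisoned[idx]
        poisoned[idx] = (apply_trigger(x), target_class)  # label flipped to the target
    return poisoned
```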
1 code implementation • 26 Mar 2020 • Khoa D. Doan, Saurav Manchanda, Fengjiao Wang, Sathiya Keerthi, Avradeep Bhowmik, Chandan K. Reddy
We use the intuition that it is much better to train the GAN generator by minimizing the distributional distance between real and generated images in a low-dimensional feature space representing such a manifold, rather than in the original pixel space.
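A minimal sketch of that intuition follows, assuming a fixed feature encoder and a simple moment-matching distance; the encoder and the distance are illustrative assumptions, not the paper's exact objective.

```python
# Feature-space distribution matching sketch (illustrative, assumed loss).
import torch

def feature_matching_loss(encoder, real_images, fake_images):
    """Distance between feature-space statistics of real and generated batches."""
    real_feat = encoder(real_images)   # (batch, feature_dim)
    fake_feat = encoder(fake_images)
    mean_gap = (real_feat.mean(0) - fake_feat.mean(0)).pow(2).sum()
    std_gap = (real_feat.std(0) - fake_feat.std(0)).pow(2).sum()
    return mean_gap + std_gap

# The generator is then trained to minimize this loss instead of a pixel-space distance.
```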
1 code implementation • 29 Feb 2020 • Khoa D. Doan, Saurav Manchanda, Sarkhan Badirli, Chandan K. Reddy
In this paper, we show that the high sample-complexity requirement often results in sub-optimal retrieval performance of the adversarial hashing methods.