no code implementations • 29 Dec 2023 • Xiao-Yang Liu, Rongyi Zhu, Daochen Zha, Jiechao Gao, Shan Zhong, Meikang Qiu
The surge in interest and application of large language models (LLMs) has sparked a drive to fine-tune these models to suit specific applications, such as finance and medical science.
1 code implementation • 4 Aug 2023 • Jiapu Wang, Boyue Wang, Meikang Qiu, Shirui Pan, Bo Xiong, Heng Liu, Linhao Luo, Tengfei Liu, Yongli Hu, BaoCai Yin, Wen Gao
Temporal characteristics are prominently evident in a substantial volume of knowledge, which underscores the pivotal role of Temporal Knowledge Graphs (TKGs) in both academia and industry.
1 code implementation • 1 May 2023 • Chen Ling, Junji Jiang, Junxiang Wang, My Thai, Lukas Xue, James Song, Meikang Qiu, Liang Zhao
Influence maximization (IM) is formulated as selecting a set of initial users from a social network to maximize the expected number of influenced users.
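The formulation above is typically tackled with a greedy seed-selection baseline under the independent cascade diffusion model (the classic approach of Kempe et al., not necessarily this paper's method). A minimal sketch, where the graph, propagation probability `p`, and Monte Carlo budget are all illustrative choices:

```python
import random

def simulate_ic(graph, seeds, p=0.1, rng=None):
    """One Monte Carlo run of the independent cascade model.
    graph: dict mapping node -> list of neighbor nodes."""
    rng = rng or random
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                # Each newly active node tries once to activate each neighbor.
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, p=0.1, runs=200, seed=0):
    """Greedily add the node with the largest estimated marginal gain
    in expected spread, k times."""
    rng = random.Random(seed)
    seeds = []
    for _ in range(k):
        best, best_spread = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            spread = sum(simulate_ic(graph, seeds + [v], p, rng)
                         for _ in range(runs)) / runs
            if spread > best_spread:
                best, best_spread = v, spread
        seeds.append(best)
    return seeds
```

Because expected spread is monotone and submodular under this model, the greedy choice carries a (1 - 1/e) approximation guarantee, which is why it is the usual point of comparison.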
no code implementations • 16 Feb 2023 • Zichong Wang, Yang Zhou, Meikang Qiu, Israat Haque, Laura Brown, Yi He, Jianwu Wang, David Lo, Wenbin Zhang
The increasing use of Machine Learning (ML) software can lead to unfair and unethical decisions, thus fairness bugs in software are becoming a growing concern.
2 code implementations • 11 Apr 2022 • Yi Zeng, Minzhou Pan, Hoang Anh Just, Lingjuan Lyu, Meikang Qiu, Ruoxi Jia
With poisoning equal to or less than 0.5% of the target-class data and 0.05% of the training set, we can train a model to classify test examples from arbitrary classes into the target class when the examples are patched with a backdoor trigger.
Ranked #1 on Clean-label Backdoor Attack (0.05%) on Tiny ImageNet
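"Patched with a backdoor trigger" commonly means pasting a small fixed pattern onto the input image. A minimal sketch of such patching, where the trigger shape, value, and corner position are illustrative assumptions rather than this paper's exact setup:

```python
import numpy as np

def apply_trigger(image, trigger, corner=(0, 0)):
    """Paste a small trigger patch onto an (H, W, C) image at the
    given corner; returns a copy, leaving the input untouched."""
    out = image.copy()
    y, x = corner
    h, w = trigger.shape[:2]
    out[y:y + h, x:x + w] = trigger
    return out

# Illustrative example: a 3x3 white square in the top-left corner.
img = np.zeros((32, 32, 3), dtype=np.uint8)
trigger = np.full((3, 3, 3), 255, dtype=np.uint8)
patched = apply_trigger(img, trigger)
```

At poisoning time the attacker applies the same patch to a handful of training images relabeled to the target class; at test time any input carrying the patch is then misclassified into that class.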
no code implementations • 20 Jan 2022 • Yutong Wu, Han Qiu, Tianwei Zhang, Jiwei Li, Meikang Qiu
It is challenging to migrate existing watermarking techniques from the classification tasks to the contrastive learning scenario, as the owner of the encoder lacks the knowledge of the downstream tasks which will be developed from the encoder in the future.
no code implementations • 13 Dec 2020 • Han Qiu, Yi Zeng, Shangwei Guo, Tianwei Zhang, Meikang Qiu, Bhavani Thuraisingham
In this paper, we investigate the effectiveness of data augmentation techniques in mitigating backdoor attacks and enhancing DL models' robustness.
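The intuition behind augmentation-based mitigation is that spatial transformations can disrupt a trigger pattern tied to a fixed position or appearance. A minimal sketch of such an augmentation step (random flip plus small translation); the specific transforms and magnitudes here are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def augment(image, rng):
    """Random horizontal flip and small translation on an (H, W, C)
    array; such spatial jitter can break a fixed-position trigger."""
    out = image
    if rng.random() < 0.5:
        out = out[:, ::-1]  # horizontal flip
    dy, dx = rng.integers(-2, 3, size=2)  # shift by up to 2 pixels
    out = np.roll(out, (int(dy), int(dx)), axis=(0, 1))
    return out
```

Applied during training (or as test-time preprocessing), this keeps clean accuracy largely intact while making it harder for the model to latch onto a rigid trigger.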
1 code implementation • 3 Dec 2020 • Han Qiu, Yi Zeng, Tianwei Zhang, Yong Jiang, Meikang Qiu
As more and more advanced adversarial attack methods have been developed, a number of corresponding defense solutions have been designed to enhance the robustness of DNN models.
no code implementations • 30 Jul 2020 • Yi Zeng, Han Qiu, Gerard Memmi, Meikang Qiu
Deep Neural Networks (DNNs) in Computer Vision (CV) are well known to be vulnerable to Adversarial Examples (AEs): imperceptible perturbations added maliciously to cause incorrect classification results.
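A standard way to craft such perturbations is the Fast Gradient Sign Method (FGSM) of Goodfellow et al., which is background here rather than this paper's contribution. A minimal sketch using a toy logistic model so the input gradient can be written in closed form; the weights, input, and epsilon are illustrative:

```python
import numpy as np

def fgsm(x, grad_x, eps=0.03):
    """Fast Gradient Sign Method: step the input along the sign of the
    loss gradient, clipped back to the valid [0, 1] pixel range."""
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Toy logistic model: for cross-entropy loss, the gradient of the loss
# w.r.t. the input is (sigmoid(w @ x) - y) * w.
w = np.array([0.5, -1.0, 2.0])
x = np.array([0.2, 0.8, 0.5])
y = 1.0
p = 1.0 / (1.0 + np.exp(-w @ x))
grad_x = (p - y) * w
x_adv = fgsm(x, grad_x)
```

Each coordinate moves by at most `eps`, which is what makes the perturbation small in the L-infinity sense while still increasing the loss.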
1 code implementation • 27 May 2020 • Han Qiu, Yi Zeng, Qinkai Zheng, Tianwei Zhang, Meikang Qiu, Gerard Memmi
Extensive evaluations indicate that our solutions can effectively mitigate all existing standard and advanced attack techniques, and beat 11 state-of-the-art defense solutions published in top-tier conferences over the past 2 years.