Search Results for author: Meikang Qiu

Found 10 papers, 5 papers with code

Differentially Private Low-Rank Adaptation of Large Language Model Using Federated Learning

no code implementations • 29 Dec 2023 • Xiao-Yang Liu, Rongyi Zhu, Daochen Zha, Jiechao Gao, Shan Zhong, Meikang Qiu

The surge in interest and application of large language models (LLMs) has sparked a drive to fine-tune these models to suit specific applications, such as finance and medical science.
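
For reference, the low-rank adaptation (LoRA) named in the title replaces full fine-tuning of a frozen weight matrix $W_0$ with two small trainable factors; this is the standard formulation, shown here only for context and not necessarily in the paper's exact notation:

$$
h = W_0 x + \frac{\alpha}{r} B A x, \qquad A \in \mathbb{R}^{r \times d},\quad B \in \mathbb{R}^{k \times r},\quad r \ll \min(d, k),
$$

where only $A$ and $B$ are updated. In a differentially private federated setting, each client's update to the LoRA factors would typically be norm-clipped and perturbed with calibrated Gaussian noise before server-side aggregation; the precise privacy mechanism used in this paper may differ.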

Federated Learning Language Modelling +1

A Survey on Temporal Knowledge Graph Completion: Taxonomy, Progress, and Prospects

1 code implementation • 4 Aug 2023 • Jiapu Wang, Boyue Wang, Meikang Qiu, Shirui Pan, Bo Xiong, Heng Liu, Linhao Luo, Tengfei Liu, Yongli Hu, BaoCai Yin, Wen Gao

Temporal characteristics are prominently evident in a substantial volume of knowledge, which underscores the pivotal role of Temporal Knowledge Graphs (TKGs) in both academia and industry.

Missing Elements Temporal Knowledge Graph Completion

Deep Graph Representation Learning and Optimization for Influence Maximization

1 code implementation • 1 May 2023 • Chen Ling, Junji Jiang, Junxiang Wang, My Thai, Lukas Xue, James Song, Meikang Qiu, Liang Zhao

Influence maximization (IM) is formulated as selecting a set of initial users from a social network to maximize the expected number of influenced users.
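
A minimal formalization of the objective stated above (the generic influence maximization problem, not the deep representation learning method this paper proposes): given a graph $G = (V, E)$, a diffusion model with expected influence spread $\sigma(\cdot)$, and a seed budget $k$,

$$
S^* = \arg\max_{S \subseteq V,\ |S| \le k} \sigma(S),
$$

where $\sigma(S)$ is the expected number of users eventually influenced when the members of $S$ are activated initially, typically estimated under a diffusion model such as independent cascade or linear threshold.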

Graph Representation Learning

Towards Fair Machine Learning Software: Understanding and Addressing Model Bias Through Counterfactual Thinking

no code implementations • 16 Feb 2023 • Zichong Wang, Yang Zhou, Meikang Qiu, Israat Haque, Laura Brown, Yi He, Jianwu Wang, David Lo, Wenbin Zhang

The increasing use of Machine Learning (ML) software can lead to unfair and unethical decisions; fairness bugs in software are therefore a growing concern.

Benchmarking counterfactual +1

Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information

2 code implementations • 11 Apr 2022 • Yi Zeng, Minzhou Pan, Hoang Anh Just, Lingjuan Lyu, Meikang Qiu, Ruoxi Jia

With poisoning equal to or less than 0.5% of the target-class data and 0.05% of the training set, we can train a model to classify test examples from arbitrary classes into the target class when the examples are patched with a backdoor trigger.
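
A hedged sketch of the test-time patching step mentioned above, assuming an additive trigger pattern with the same shape as the input; the function and variable names are illustrative, not the paper's exact construction:

```python
import numpy as np

def patch_with_trigger(x, trigger, strength=1.0):
    """Apply an additive backdoor trigger to a test image.

    x:        float image array in [0, 1], shape (H, W, C)
    trigger:  perturbation pattern of the same shape as x
    strength: scaling factor controlling trigger visibility
    """
    return np.clip(x + strength * trigger, 0.0, 1.0)

# After training on the lightly poisoned data, the model is expected to
# classify any input patched this way into the attacker's target class:
# x_patched = patch_with_trigger(x_test, trigger)
```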

Backdoor Attack Clean-label Backdoor Attack (0.024%) +1

Watermarking Pre-trained Encoders in Contrastive Learning

no code implementations • 20 Jan 2022 • Yutong Wu, Han Qiu, Tianwei Zhang, Jiwei Li, Meikang Qiu

It is challenging to migrate existing watermarking techniques from classification tasks to the contrastive learning scenario, as the owner of the encoder lacks knowledge of the downstream tasks that will later be developed from the encoder.

Contrastive Learning

DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation

no code implementations • 13 Dec 2020 • Han Qiu, Yi Zeng, Shangwei Guo, Tianwei Zhang, Meikang Qiu, Bhavani Thuraisingham

In this paper, we investigate the effectiveness of data augmentation techniques in mitigating backdoor attacks and enhancing DL models' robustness.
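
An illustrative sketch of an augmentation-based preprocessing defense in general (transform each input before inference so that a potential trigger is distorted while the semantics survive); the specific transforms below are assumptions, not the augmentation policy selected by DeepSweep:

```python
import torch
from torchvision import transforms

# Hypothetical preprocessing pipeline applied to every input before it
# reaches the (possibly backdoored) model.
preprocess = transforms.Compose([
    transforms.RandomAffine(degrees=10, translate=(0.05, 0.05)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.GaussianBlur(kernel_size=3),
    transforms.ToTensor(),
])

def robust_predict(model, image):
    """Run inference on an augmented copy of a PIL image."""
    x = preprocess(image).unsqueeze(0)  # add batch dimension
    with torch.no_grad():
        return model(x).argmax(dim=1)
```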

Backdoor Attack Data Augmentation

FenceBox: A Platform for Defeating Adversarial Examples with Data Augmentation Techniques

1 code implementation • 3 Dec 2020 • Han Qiu, Yi Zeng, Tianwei Zhang, Yong Jiang, Meikang Qiu

As more and more advanced adversarial attack methods have been developed, a number of corresponding defense solutions have been designed to enhance the robustness of DNN models.

Adversarial Attack Data Augmentation

A Data Augmentation-based Defense Method Against Adversarial Attacks in Neural Networks

no code implementations • 30 Jul 2020 • Yi Zeng, Han Qiu, Gerard Memmi, Meikang Qiu

Deep Neural Networks (DNNs) in Computer Vision (CV) are well-known to be vulnerable to Adversarial Examples (AEs), namely imperceptible perturbations added maliciously to cause wrong classification results.
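
For context, a minimal FGSM-style sketch of how such an imperceptible perturbation is typically crafted (a generic attack illustration, not the defense method proposed in this paper):

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, label, epsilon=8 / 255):
    """Craft an adversarial example with the fast gradient sign method.

    x:     input image tensor in [0, 1], shape (1, C, H, W)
    label: ground-truth class index tensor, shape (1,)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip back
    # to the valid image range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```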

Data Augmentation

Mitigating Advanced Adversarial Attacks with More Advanced Gradient Obfuscation Techniques

1 code implementation • 27 May 2020 • Han Qiu, Yi Zeng, Qinkai Zheng, Tianwei Zhang, Meikang Qiu, Gerard Memmi

Extensive evaluations indicate that our solutions can effectively mitigate all existing standard and advanced attack techniques, and beat 11 state-of-the-art defense solutions published in top-tier conferences over the past 2 years.
