Search Results for author: Liming Fang

Found 9 papers, 3 papers with code

SSL-Auth: An Authentication Framework by Fragile Watermarking for Pre-trained Encoders in Self-supervised Learning

no code implementations • 9 Aug 2023 Xiaobei Li, Changchun Yin, Liyue Zhu, Xiaogang Xu, Liming Fang, Run Wang, Chenhao Lin

Self-supervised learning (SSL), a paradigm harnessing unlabeled datasets to train robust encoders, has recently witnessed substantial success.

Self-Supervised Learning

Hard Adversarial Example Mining for Improving Robust Fairness

no code implementations • 3 Aug 2023 Chenhao Lin, Xiang Ji, Yulong Yang, Qian Li, Chao Shen, Run Wang, Liming Fang

Adversarial training (AT) is widely considered the state-of-the-art technique for improving the robustness of deep neural networks (DNNs) against adversarial examples (AE).

Fairness
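As background for the entry above, the snippet below is a minimal NumPy sketch of the standard FGSM perturbation that adversarial training defends against. The toy linear model and `fgsm_perturb` helper are illustrative assumptions; this is not the paper's hard-example mining method.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """Fast Gradient Sign Method: step the input in the direction of the
    sign of the loss gradient, bounded elementwise by epsilon.
    Generic illustration, not the paper's mining strategy."""
    return x + epsilon * np.sign(grad)

# Toy linear classifier: score = w.x, so the loss gradient wrt x for the
# true class is -w (hypothetical setup for demonstration only).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
x_adv = fgsm_perturb(x, grad=-w, epsilon=0.1)
```

In adversarial training, such perturbed inputs would be fed back into the training loop alongside (or instead of) the clean examples.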

Efficient and Low Overhead Website Fingerprinting Attacks and Defenses based on TCP/IP Traffic

no code implementations27 Feb 2023 Guodong Huang, Chuan Ma, Ming Ding, Yuwen Qian, Chunpeng Ge, Liming Fang, Zhe Liu

To achieve a configurable trade-off between defense strength and network overhead, we further improve the list-based defense with a traffic-splitting mechanism, which defeats the aforementioned attacks while saving a considerable amount of network overhead.

Website Fingerprinting Attacks
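The traffic-splitting idea in the entry above can be sketched in a few lines: distribute a packet trace across several routes so that no single on-path observer sees the complete fingerprint. The round-robin policy and function name below are assumptions for illustration, not the paper's exact mechanism.

```python
def split_traffic(packets, n_routes):
    """Round-robin a packet sequence across n_routes so that any single
    observer sees only a fraction of the trace (toy illustration of a
    traffic-splitting defense; not the paper's exact scheme)."""
    routes = [[] for _ in range(n_routes)]
    for i, pkt in enumerate(packets):
        routes[i % n_routes].append(pkt)
    return routes

# Split a 10-packet trace across 3 routes.
routes = split_traffic(list(range(10)), 3)
```

A real defense would also have to consider timing and packet-size side channels, which a simple round-robin split does not hide.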

Cluster-guided Contrastive Graph Clustering Network

1 code implementation • 3 Jan 2023 Xihong Yang, Yue Liu, Sihang Zhou, Siwei Wang, Wenxuan Tu, Qun Zheng, Xinwang Liu, Liming Fang, En Zhu

Then, guided by the high-confidence clustering information, we carefully select and construct the positive samples from the same high-confidence cluster in two views.

Clustering, Contrastive Learning, +1

FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data Commitment for Federated Learning

no code implementations • 16 Apr 2021 Bo Zhao, Peng Sun, Liming Fang, Tao Wang, Keyu Jiang

The results demonstrate its effectiveness and superior performance over state-of-the-art Byzantine-robust schemes in defending against typical data poisoning and model poisoning attacks under practical Non-IID data distributions.

Data Poisoning, Federated Learning, +2
