1 code implementation • 10 Apr 2024 • Wenqian Li, Haozhi Wang, Zhe Huang, Yan Pang
Wasserstein distance is a principled measure of data divergence from a distributional standpoint.
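As a minimal illustration (not from the paper itself), the 1-Wasserstein distance between two 1-D empirical distributions with equal sample sizes reduces to the mean absolute difference between sorted samples:

```python
def wasserstein_1d(xs, ys):
    """1-Wasserstein distance between two 1-D empirical samples.

    Sketch under the assumption of equal sample sizes, where optimal
    transport simply matches the i-th smallest point of one sample to
    the i-th smallest point of the other.
    """
    assert len(xs) == len(ys), "equal sample sizes assumed in this sketch"
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

# A distribution shifted by 5 is at distance 5 from the original.
print(wasserstein_1d([0, 1, 3], [5, 6, 8]))  # -> 5.0
```

For unequal sample sizes or higher dimensions, a general optimal-transport solver is needed instead of this sorting shortcut.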
1 code implementation • 20 Feb 2024 • Yan Pang, Yang Zhang, Tianhao Wang
Together with fake video detection and tracing, our multi-faceted set of solutions can effectively mitigate misuse of video generative models.
no code implementations • 30 Dec 2023 • Ran Yan, YuJun Li, Wenqian Li, Peihua Mai, Yan Pang, Yinchuan Li
Large Language Models (LLMs) have proven powerful, but the risk of privacy leakage remains a significant concern.
1 code implementation • 9 Nov 2023 • Wenqian Li, Shuran Fu, Fengrui Zhang, Yan Pang
In scenarios involving numerous data clients within FL, it is often the case that only a subset of clients and datasets are pertinent to a specific learning task, while others might have either a negative or negligible impact on the model training process.
no code implementations • 13 Oct 2023 • Peihua Mai, Ran Yan, Zhe Huang, Youjia Yang, Yan Pang
Large Language Models (LLMs) show powerful capability in natural language understanding by capturing hidden semantics in vector space.
1 code implementation • 4 Mar 2023 • Wenqian Li, Yinchuan Li, Zhigang Li, Jianye Hao, Yan Pang
Uncovering rationales behind predictions of graph neural networks (GNNs) has received increasing attention over the years.
2 code implementations • 15 Oct 2022 • Wenqian Li, Yinchuan Li, Shengyu Zhu, Yunfeng Shao, Jianye Hao, Yan Pang
Causal discovery aims to uncover causal structure among a set of variables.
no code implementations • 29 Sep 2022 • Peihua Mai, Yan Pang
The paper proposes a new privacy-preserving framework based on homomorphic encryption, Privacy-Preserving Multi-View Matrix Factorization (PrivMVMF), to strengthen user data privacy protection in federated recommender systems.
no code implementations • 4 Jan 2022 • Yan Pang, Chao Liu
To improve transparency, we propose a new network called Graph Decipher that investigates the message-passing mechanism by prioritizing its two main components, the graph structure and node attributes, at the graph, feature, and global levels under the node classification task.
no code implementations • 4 Jan 2022 • Yan Pang, Chao Liu
Dynamic graph neural networks have been widely used in modeling and representation learning of graph structure data.