1 code implementation • 9 Mar 2024 • Xiaowei Qian, Zhimeng Guo, Jialiang Li, Haitao Mao, Bingheng Li, Suhang Wang, Yao Ma
These datasets are thoughtfully designed to include relevant graph structures and bias information crucial for the fair evaluation of models.
no code implementations • 2 Oct 2023 • Hangfan Zhang, Zhimeng Guo, Huaisheng Zhu, Bochuan Cao, Lu Lin, Jinyuan Jia, Jinghui Chen, Dinghao Wu
A natural question is: "Could alignment really prevent those open-sourced large language models from being misused to generate undesired content?"
1 code implementation • 10 Jul 2023 • Zhimeng Guo, Jialiang Li, Teng Xiao, Yao Ma, Suhang Wang
Despite their great performance in modeling graphs, recent works show that GNNs tend to inherit and amplify bias from training data, raising concerns about the adoption of GNNs in high-stakes scenarios.
no code implementations • 19 Jun 2023 • Huaisheng Zhu, Guoji Fu, Zhimeng Guo, Zhiwei Zhang, Teng Xiao, Suhang Wang
Graph Neural Networks (GNNs) have shown great power in various domains.
1 code implementation • 3 Apr 2023 • Zhimeng Guo, Teng Xiao, Zongyu Wu, Charu Aggarwal, Hui Liu, Suhang Wang
To facilitate the development of this promising direction, in this survey, we categorize and comprehensively review papers on graph counterfactual learning.
no code implementations • 3 Aug 2022 • Shijie Zhou, Zhimeng Guo, Charu Aggarwal, Xiang Zhang, Suhang Wang
Therefore, in this paper, we study a novel problem of exploring disentangled representation learning for link prediction on heterophilic graphs.
no code implementations • 7 Jun 2022 • Teng Xiao, Zhengyu Chen, Zhimeng Guo, Zeyang Zhuang, Suhang Wang
This paper studies the problem of self-supervised learning for node representation learning on graphs.
no code implementations • 18 Apr 2022 • Enyan Dai, Tianxiang Zhao, Huaisheng Zhu, Junjie Xu, Zhimeng Guo, Hui Liu, Jiliang Tang, Suhang Wang
Despite their great potential to benefit humans in the real world, recent studies show that GNNs can leak private information, are vulnerable to adversarial attacks, can inherit and magnify societal bias from training data, and lack interpretability, all of which risk causing unintentional harm to users and society.
1 code implementation • 15 Oct 2021 • Enyan Dai, Shijie Zhou, Zhimeng Guo, Suhang Wang
Graph Neural Networks (GNNs) have achieved remarkable performance in modeling graphs for various applications.
Ranked #1 on Node Classification on Crocodile