1 code implementation • 1 Aug 2024 • Binchi Zhang, Yushun Dong, Tianhao Wang, Jundong Li
In the field of machine unlearning, certified unlearning has been extensively studied in convex machine learning models due to its high efficiency and strong theoretical guarantees.
no code implementations • 28 Jul 2024 • Yushun Dong, Binchi Zhang, Zhenyu Lei, Na Zou, Jundong Li
Specifically, we first instantiate four types of unlearning requests on graphs, and then we propose an approximation approach to flexibly handle these unlearning requests over diverse GNNs.
no code implementations • 16 Jul 2024 • Yushun Dong, Song Wang, Zhenyu Lei, Zaiyi Zheng, Jing Ma, Chen Chen, Jundong Li
Fairness-aware graph learning has gained increasing attention in recent years.
1 code implementation • 16 Jul 2024 • Zhixun Li, Yushun Dong, Qiang Liu, Jeffrey Xu Yu
We claim that the imbalance across different demographic groups is a significant source of unfairness, as it leads to imbalanced contributions from each group to the parameter updates.
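As a rough illustration of this idea (not the paper's actual method), one simple way to equalize each group's contribution to parameter updates is inverse-frequency sample reweighting; the function name and normalization below are assumptions for the sketch:

```python
import numpy as np

def group_balanced_weights(groups):
    # Per-sample weights inversely proportional to group size, so each
    # demographic group contributes equally to the aggregate gradient.
    groups = np.asarray(groups)
    uniq, counts = np.unique(groups, return_counts=True)
    size = dict(zip(uniq.tolist(), counts.tolist()))
    w = np.array([1.0 / size[g] for g in groups])
    return w * len(w) / w.sum()  # normalize so the weights average to 1

groups = [0, 0, 0, 1]            # group 0 has 3 samples, group 1 has 1
w = group_balanced_weights(groups)
# total weight of group 0 (w[:3].sum()) now equals that of group 1 (w[3])
```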
no code implementations • 2 Jul 2024 • Song Wang, Peng Wang, Tong Zhou, Yushun Dong, Zhen Tan, Jundong Li
To address these limitations, we collect a variety of datasets designed for the bias evaluation of LLMs, and further propose CEB, a Compositional Evaluation Benchmark that covers different types of bias across different social groups and tasks.
1 code implementation • 19 Jun 2024 • Haochen Liu, Song Wang, Yaochen Zhu, Yushun Dong, Jundong Li
In addition, LLMs tend to select only knowledge with a direct semantic relationship to the input text, while potentially useful knowledge with indirect semantic relationships may be ignored.
no code implementations • 17 May 2024 • Song Wang, Yushun Dong, Binchi Zhang, Zihan Chen, Xingbo Fu, Yinhan He, Cong Shen, Chuxu Zhang, Nitesh V. Chawla, Jundong Li
In this survey paper, we explore three critical aspects vital for enhancing safety in Graph ML: reliability, generalizability, and confidentiality.
1 code implementation • 5 Nov 2023 • Yushun Dong, Binchi Zhang, Hanghang Tong, Jundong Li
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks over the years.
1 code implementation • 20 Oct 2023 • Binchi Zhang, Yushun Dong, Chen Chen, Yada Zhu, Minnan Luo, Jundong Li
Fairness-aware graph neural networks (GNNs) have gained a surge of attention as they can reduce the bias of predictions against any demographic group (e.g., female) in graph-based applications.
1 code implementation • 18 Aug 2023 • Yucheng Shi, Yushun Dong, Qiaoyu Tan, Jundong Li, Ninghao Liu
By considering embeddings that encompass graph topology and attribute information as reconstruction targets, our model can capture more generalized and comprehensive knowledge.
no code implementations • 2 May 2023 • Xingbo Fu, Chen Chen, Yushun Dong, Anil Vullikanti, Eili Klein, Gregory Madden, Jundong Li
In this paper, we propose a novel problem of antibiogram pattern prediction that aims to predict which patterns will appear in the future.
no code implementations • 2 May 2023 • Yushun Dong, Jundong Li, Tobias Schnabel
In recent years, neural models have been repeatedly touted as achieving state-of-the-art performance in recommendation.
1 code implementation • 6 Jan 2023 • Song Wang, Yushun Dong, Kaize Ding, Chen Chen, Jundong Li
Recent few-shot node classification methods typically learn from classes with abundant labeled nodes (i.e., meta-training classes) and then generalize to classes with limited labeled nodes (i.e., meta-test classes).
1 code implementation • 3 Jan 2023 • Yushun Dong, Binchi Zhang, Yiling Yuan, Na Zou, Qi Wang, Jundong Li
Knowledge Distillation (KD) is a common solution for compressing GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model).
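As a minimal sketch of the generic KD objective described above (not this paper's specific method), the student can be trained to match the teacher's temperature-softened output distribution; the function names and temperature value are assumptions:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between the softened teacher and student distributions,
    # averaged over nodes; minimizing it makes the student mimic the teacher.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return float(np.mean(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)))

rng = np.random.default_rng(0)
teacher = rng.normal(size=(5, 3))               # teacher GNN logits: 5 nodes, 3 classes
loss_same = distillation_loss(teacher, teacher)  # identical outputs -> zero loss
loss_diff = distillation_loss(rng.normal(size=(5, 3)), teacher)  # mismatch -> positive
```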
1 code implementation • 25 Nov 2022 • Yushun Dong, Song Wang, Jing Ma, Ninghao Liu, Jundong Li
In this paper, we study the novel problem of interpreting GNN unfairness by attributing it to the influence of training nodes.
no code implementations • 24 Jul 2022 • Xingbo Fu, Binchi Zhang, Yushun Dong, Chen Chen, Jundong Li
Federated Graph Machine Learning (FGML) is a promising solution to tackle this challenge by training graph machine learning models in a federated manner.
1 code implementation • 24 Jun 2022 • Yushun Dong, Song Wang, Yu Wang, Tyler Derr, Jundong Li
The low transparency on how the structure of the input network influences the bias in GNN outcome largely limits the safe adoption of GNNs in various decision-critical scenarios.
1 code implementation • 7 Jun 2022 • Yu Wang, Yuying Zhao, Yushun Dong, Huiyuan Chen, Jundong Li, Tyler Derr
Motivated by our analysis, we propose Fair View Graph Neural Network (FairVGNN), which generates fair views of features by automatically identifying and masking features correlated with sensitive attributes, accounting for how correlations vary after feature propagation.
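As a rough illustration of the masking idea (FairVGNN learns the masks automatically; the fixed Pearson-correlation threshold below is a simplifying assumption for the sketch):

```python
import numpy as np

def mask_sensitive_correlated(X, s, threshold=0.5):
    # Zero out feature columns whose |Pearson correlation| with the
    # sensitive attribute s exceeds the threshold.
    Xc = X - X.mean(axis=0)
    sc = s - s.mean()
    denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(sc)
    corr = np.where(denom > 0, (Xc * sc[:, None]).sum(axis=0) / denom, 0.0)
    keep = np.abs(corr) <= threshold  # keep only weakly correlated features
    return X * keep, keep

rng = np.random.default_rng(1)
s = rng.integers(0, 2, size=200).astype(float)        # binary sensitive attribute
X = np.column_stack([s + 0.01 * rng.normal(size=200),  # column 0 leaks s
                     rng.normal(size=200)])            # column 1 is independent
X_fair, keep = mask_sensitive_correlated(X, s)         # column 0 is zeroed out
```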
1 code implementation • 5 May 2022 • Song Wang, Yushun Dong, Xiao Huang, Chen Chen, Jundong Li
Specifically, these works propose to accumulate meta-knowledge across diverse meta-training tasks, and then generalize such meta-knowledge to the target task with a disjoint label set.
no code implementations • 24 Apr 2022 • Zheng Huang, Jing Ma, Yushun Dong, Natasha Zhang Foutz, Jundong Li
Notably, LBSNs have offered unparalleled access to abundant heterogeneous relational information about users and POIs (including user-user social relations, such as families or colleagues, and user-POI visiting relations).
2 code implementations • 21 Apr 2022 • Yushun Dong, Jing Ma, Song Wang, Chen Chen, Jundong Li
Recently, algorithmic fairness has been extensively studied in graph-based applications.
1 code implementation • 11 Aug 2021 • Yushun Dong, Ninghao Liu, Brian Jalaian, Jundong Li
We then develop a framework EDITS to mitigate the bias in attributed networks while maintaining the performance of GNNs in downstream tasks.
1 code implementation • 29 May 2021 • Jing Ma, Yushun Dong, Zheng Huang, Daniel Mietchen, Jundong Li
Moreover, as the confounders may be time-varying during COVID-19 (e.g., residents' vigilance changes over the course of the pandemic), it is even more difficult to capture them.
1 code implementation • 26 Apr 2021 • Yushun Dong, Kaize Ding, Brian Jalaian, Shuiwang Ji, Jundong Li
Existing efforts can be mainly categorized as spectral-based and spatial-based methods.