no code implementations • 28 Aug 2023 • Song Wang, Jing Ma, Lu Cheng, Jundong Li
These auxiliary sets contain several labeled training samples that can enhance model performance with respect to fairness in meta-test tasks, thereby allowing learned fairness-oriented knowledge to be transferred to those tasks.
1 code implementation • 18 Aug 2023 • Yucheng Shi, Yushun Dong, Qiaoyu Tan, Jundong Li, Ninghao Liu
By considering embeddings encompassing graph topology and attribute information as reconstruction targets, our model could capture more generalized and comprehensive knowledge.
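The snippet above only names the idea; as a hedged illustration of using embeddings that encode topology and attributes (rather than raw features) as reconstruction targets, a minimal PyTorch sketch with hypothetical tensor names and a cosine-error objective could look like:

```python
import torch
import torch.nn.functional as F

def embedding_reconstruction_loss(predicted, target, mask):
    """Cosine-based reconstruction error on masked nodes only.

    predicted: [N, d] embeddings produced by the decoder
    target:    [N, d] embeddings that encode topology + attributes, treated as
               the reconstruction target (e.g. from a separate/frozen encoder)
    mask:      [N] boolean tensor marking which nodes were masked out
    """
    p = F.normalize(predicted[mask], dim=-1)
    t = F.normalize(target[mask], dim=-1)
    # 1 - cosine similarity, averaged over the masked nodes
    return (1.0 - (p * t).sum(dim=-1)).mean()

# toy usage with random tensors
pred = torch.randn(100, 64)
tgt = torch.randn(100, 64)
mask = torch.rand(100) < 0.3
loss = embedding_reconstruction_loss(pred, tgt, mask)
```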
1 code implementation • 22 Jul 2023 • Qiaoyu Tan, Xin Zhang, Xiao Huang, Hao Chen, Jundong Li, Xia Hu
Graph neural networks (GNNs) have shown prominent performance on attributed network embedding.
no code implementations • 17 Jul 2023 • Jing Ma, Chen Chen, Anil Vullikanti, Ritwick Mishra, Gregory Madden, Daniel Borrajo, Jundong Li
In this paper, we study the problem of causal effect estimation with treatment entangled in a graph.
no code implementations • 17 Jul 2023 • Jing Ma, Ruocheng Guo, Aidong Zhang, Jundong Li
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
1 code implementation • 27 Jun 2023 • Song Wang, Zhen Tan, Huan Liu, Jundong Li
First, we propose to enhance the intra-class generalizability by involving a contrastive two-step optimization in each episode to explicitly align node embeddings in the same classes.
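As a rough illustration of aligning same-class node embeddings with a contrastive objective (a generic supervised contrastive loss, not the paper's exact two-step procedure), one could write:

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, temperature=0.5):
    """Pull node embeddings of the same class together and push different
    classes apart within one episode."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / temperature                        # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask

    logits = sim.masked_fill(self_mask, float("-inf"))   # never contrast with self
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)      # avoid -inf * 0 below
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    return (-(log_prob * pos_mask.float()).sum(dim=1) / pos_counts).mean()

z = torch.randn(20, 32, requires_grad=True)
labels = torch.randint(0, 4, (20,))
supervised_contrastive_loss(z, labels).backward()
```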
1 code implementation • 17 Jun 2023 • Song Wang, Xingbo Fu, Kaize Ding, Chen Chen, Huiyuan Chen, Jundong Li
In this way, the server can exploit the computational power of all clients and train the model on a larger set of data samples among all clients.
1 code implementation • 5 Jun 2023 • Yaochen Zhu, Jing Ma, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li
However, since sensitive features may also affect user interests in a fair manner (e.g., race on culture-based preferences), indiscriminately eliminating all influences of sensitive features inevitably degrades recommendation quality and necessary diversity.
no code implementations • 2 May 2023 • Xingbo Fu, Chen Chen, Yushun Dong, Anil Vullikanti, Eili Klein, Gregory Madden, Jundong Li
In this paper, we propose a novel problem of antibiogram pattern prediction that aims to predict which patterns will appear in the future.
no code implementations • 2 May 2023 • Yushun Dong, Jundong Li, Tobias Schnabel
In recent years, neural models have been repeatedly touted to exhibit state-of-the-art performance in recommendation.
1 code implementation • 6 Jan 2023 • Song Wang, Yushun Dong, Kaize Ding, Chen Chen, Jundong Li
Recent few-shot node classification methods typically learn from classes with abundant labeled nodes (i.e., meta-training classes) and then generalize to classes with limited labeled nodes (i.e., meta-test classes).
1 code implementation • 3 Jan 2023 • Yushun Dong, Binchi Zhang, Yiling Yuan, Na Zou, Qi Wang, Jundong Li
Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model).
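The abstract does not spell out the distillation objective; a standard Hinton-style knowledge-distillation loss, the usual starting point for this kind of teacher-student compression, looks roughly like:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Soft-target distillation: the student matches the teacher's
    temperature-softened class distribution while also fitting the labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student = torch.randn(8, 5, requires_grad=True)   # logits from the light student
teacher = torch.randn(8, 5)                        # logits from the frozen teacher GNN
labels = torch.randint(0, 5, (8,))
kd_loss(student, teacher, labels).backward()
```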
1 code implementation • 3 Jan 2023 • Yaochen Zhu, Jing Ma, Jundong Li
Traditional RSs estimate user interests and predict their future behaviors by utilizing correlations in the observational historical activities, their profiles, and the content of interacted items.
1 code implementation • 11 Dec 2022 • Zhen Tan, Song Wang, Kaize Ding, Jundong Li, Huan Liu
More recently, inspired by the development of graph self-supervised learning, transferring pretrained node embeddings for few-shot node classification could be a promising alternative to meta-learning but remains largely unexplored.
1 code implementation • 25 Nov 2022 • Yushun Dong, Song Wang, Jing Ma, Ninghao Liu, Jundong Li
In this paper, we study a novel problem of interpreting GNN unfairness by attributing it to the influence of training nodes.
1 code implementation • 21 Oct 2022 • Song Wang, Chen Chen, Jundong Li
Therefore, to adaptively learn node representations across meta-tasks, we propose a novel framework that learns a task-specific structure for each meta-task.
no code implementations • 16 Oct 2022 • Jing Ma, Ruocheng Guo, Saumitra Mishra, Aidong Zhang, Jundong Li
Counterfactual explanations promote explainability in machine learning models by answering the question "how should an input instance be perturbed to obtain a desired predicted label?".
no code implementations • 30 Sep 2022 • Chunhui Zhang, Hongfu Liu, Jundong Li, Yanfang Ye, Chuxu Zhang
Later, the trained encoder is frozen as a teacher model to distill a student model with a contrastive loss.
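As a hedged sketch of distilling a student from a frozen teacher with a contrastive loss (an InfoNCE objective over matching node embeddings; tensor names are hypothetical):

```python
import torch
import torch.nn.functional as F

def contrastive_distill_loss(student_emb, teacher_emb, temperature=0.2):
    """InfoNCE-style distillation: each student embedding should be closest to
    the frozen teacher's embedding of the same node, relative to other nodes."""
    s = F.normalize(student_emb, dim=-1)
    t = F.normalize(teacher_emb, dim=-1).detach()    # teacher is frozen
    logits = s @ t.t() / temperature                 # [N, N] similarity matrix
    targets = torch.arange(len(s))                   # positives on the diagonal
    return F.cross_entropy(logits, targets)

student = torch.randn(16, 64, requires_grad=True)
teacher = torch.randn(16, 64)
contrastive_distill_loss(student, teacher).backward()
```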
1 code implementation • 17 Aug 2022 • Zhenyu Lei, Herun Wan, Wenqian Zhang, Shangbin Feng, Zilong Chen, Jundong Li, Qinghua Zheng, Minnan Luo
In addition, given the stealing behavior of novel Twitter bots, BIC proposes to model semantic consistency in tweets based on attention weights while using it to augment the decision process.
1 code implementation • 16 Aug 2022 • Zhaoxuan Tan, Zilong Chen, Shangbin Feng, Qingyue Zhang, Qinghua Zheng, Jundong Li, Minnan Luo
Knowledge Graph Embeddings (KGE) aim to map entities and relations to low dimensional spaces and have become the de facto standard for knowledge graph completion.
no code implementations • 24 Jul 2022 • Xingbo Fu, Binchi Zhang, Yushun Dong, Chen Chen, Jundong Li
Federated Graph Machine Learning (FGML) is a promising solution to tackle this challenge by training graph machine learning models in a federated manner.
no code implementations • 7 Jul 2022 • Jing Ma, Mengting Wan, Longqi Yang, Jundong Li, Brent Hecht, Jaime Teevan
Hypergraphs provide an effective abstraction for modeling multi-way group interactions among nodes, where each hyperedge can connect any number of nodes.
1 code implementation • 24 Jun 2022 • Yushun Dong, Song Wang, Yu Wang, Tyler Derr, Jundong Li
The lack of transparency about how the structure of the input network influences bias in GNN outcomes largely limits the safe adoption of GNNs in decision-critical scenarios.
1 code implementation • 23 Jun 2022 • Song Wang, Kaize Ding, Chuxu Zhang, Chen Chen, Jundong Li
Then we transfer such knowledge to the classes with limited labeled nodes via our proposed task-adaptive modules.
2 code implementations • 21 Jun 2022 • Kay Liu, Yingtong Dou, Yue Zhao, Xueying Ding, Xiyang Hu, Ruitong Zhang, Kaize Ding, Canyu Chen, Hao Peng, Kai Shu, Lichao Sun, Jundong Li, George H. Chen, Zhihao Jia, Philip S. Yu
To bridge this gap, we present what is, to the best of our knowledge, the first comprehensive benchmark for unsupervised outlier node detection on static attributed graphs, called BOND, with the following highlights.
1 code implementation • 9 Jun 2022 • Shangbin Feng, Zhaoxuan Tan, Herun Wan, Ningnan Wang, Zilong Chen, Binchi Zhang, Qinghua Zheng, Wenqian Zhang, Zhenyu Lei, Shujie Yang, Xinshun Feng, Qingyue Zhang, Hongrui Wang, YuHan Liu, Yuyang Bai, Heng Wang, Zijian Cai, Yanbo Wang, Lijing Zheng, Zihan Ma, Jundong Li, Minnan Luo
Twitter bot detection has become an increasingly important task to combat misinformation, facilitate social media moderation, and preserve the integrity of the online discourse.
1 code implementation • 7 Jun 2022 • Yu Wang, Yuying Zhao, Yushun Dong, Huiyuan Chen, Jundong Li, Tyler Derr
Motivated by our analysis, we propose Fair View Graph Neural Network (FairVGNN) to generate fair views of features by automatically identifying and masking sensitive-correlated features considering correlation variation after feature propagation.
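FairVGNN learns its masks automatically; purely as an intuition aid, a much-simplified sketch that screens feature channels by their correlation with the sensitive attribute (a fixed threshold instead of the learned, adversarially trained mask) could be:

```python
import torch

def mask_sensitive_correlated(X, s, threshold=0.3):
    """Zero out feature channels whose absolute Pearson correlation with the
    sensitive attribute exceeds a threshold (a crude stand-in for a learned mask).

    X: [N, d] node features (e.g. after propagation), s: [N] sensitive attribute
    """
    s = s.float()
    Xc = X - X.mean(dim=0, keepdim=True)
    sc = s - s.mean()
    cov = (Xc * sc[:, None]).mean(dim=0)
    corr = cov / (Xc.pow(2).mean(dim=0).sqrt() * sc.pow(2).mean().sqrt() + 1e-8)
    keep = (corr.abs() <= threshold).float()          # 1 = keep, 0 = mask out
    return X * keep, keep

X = torch.randn(200, 10)
s = torch.randint(0, 2, (200,))
X_fair, kept = mask_sensitive_correlated(X, s)
```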
1 code implementation • 20 May 2022 • Qinghua Zheng, Jihong Wang, Minnan Luo, YaoLiang Yu, Jundong Li, Lina Yao, Xiaojun Chang
Due to the superior performance of Graph Neural Networks (GNNs) in various domains, there is an increasing interest in the GNN explanation problem: "which fraction of the input graph is most crucial to the model's decision?"
1 code implementation • 5 May 2022 • Song Wang, Yushun Dong, Xiao Huang, Chen Chen, Jundong Li
Specifically, these works propose to accumulate meta-knowledge across diverse meta-training tasks, and then generalize such meta-knowledge to the target task with a disjoint label set.
no code implementations • 24 Apr 2022 • Zheng Huang, Jing Ma, Yushun Dong, Natasha Zhang Foutz, Jundong Li
Notably, LBSNs have offered unparalleled access to abundant heterogeneous relational information about users and POIs (including user-user social relations, such as families or colleagues; and user-POI visiting relations).
2 code implementations • 21 Apr 2022 • Yushun Dong, Jing Ma, Song Wang, Chen Chen, Jundong Li
Recently, algorithmic fairness has been extensively studied in graph-based applications.
1 code implementation • NAACL 2022 • Wenqian Zhang, Shangbin Feng, Zilong Chen, Zhenyu Lei, Jundong Li, Minnan Luo
Previous approaches generally focus on leveraging textual content to identify stances, while they fail to reason with background knowledge or leverage the rich semantic and syntactic textual labels in news articles.
1 code implementation • 29 Mar 2022 • Junliang Yu, Hongzhi Yin, Xin Xia, Tong Chen, Jundong Li, Zi Huang
In recent years, neural architecture-based recommender systems have achieved tremendous success, but they still fall short of expectation when dealing with highly sparse data.
no code implementations • 17 Mar 2022 • Chuxu Zhang, Kaize Ding, Jundong Li, Xiangliang Zhang, Yanfang Ye, Nitesh V. Chawla, Huan Liu
In light of this, few-shot learning on graphs (FSLG), which combines the strengths of graph representation learning and few-shot learning, has been proposed to tackle the performance degradation in the face of limited annotated data.
no code implementations • 13 Feb 2022 • Xiaotian Han, Zhimeng Jiang, Ninghao Liu, Qingquan Song, Jundong Li, Xia Hu
Learning discriminative node representations benefits various downstream tasks in graph analysis such as community detection and node classification.
no code implementations • 21 Jan 2022 • Jihong Wang, Minnan Luo, Jundong Li, Ziqi Liu, Jun Zhou, Qinghua Zheng
Our RGIB attempts to learn robust node representations against adversarial perturbations by preserving the original information in the benign graph while eliminating the adversarial information in the adversarial graph.
1 code implementation • 10 Jan 2022 • Jing Ma, Ruocheng Guo, Mengting Wan, Longqi Yang, Aidong Zhang, Jundong Li
In this framework, we generate counterfactuals corresponding to perturbations on each node's and its neighbors' sensitive attributes.
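The paper's counterfactual generation is more involved; as a minimal sketch of the underlying check, one can flip the sensitive attribute of a node and its neighbors and compare predictions through an assumed model(X, edge_index) interface (all names here are hypothetical):

```python
import torch

def counterfactual_fairness_gap(model, X, edge_index, sens_col, node, neighbors):
    """Measure how much a node's prediction changes when the (binary) sensitive
    attribute of the node and its neighbors is flipped.

    model is any callable mapping (X, edge_index) -> logits of shape [N, C];
    sens_col is the index of the sensitive-attribute column in X.
    """
    with torch.no_grad():
        original = model(X, edge_index).softmax(-1)[node]
        X_cf = X.clone()
        idx = torch.cat([torch.tensor([node]), neighbors])
        X_cf[idx, sens_col] = 1.0 - X_cf[idx, sens_col]   # flip the attribute
        counterfactual = model(X_cf, edge_index).softmax(-1)[node]
    return (original - counterfactual).abs().sum()

# dummy usage: a linear "model" that ignores the graph structure
W = torch.randn(5, 2)
model = lambda X, edge_index: X @ W
X = torch.rand(10, 5).round()                             # binary features
gap = counterfactual_fairness_gap(model, X, None, sens_col=0,
                                  node=3, neighbors=torch.tensor([1, 4]))
```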
no code implementations • 26 Oct 2021 • Nan Wang, Lu Lin, Jundong Li, Hongning Wang
In this paper, we propose a principled new way for unbiased graph embedding by learning node embeddings from an underlying bias-free graph, which is not influenced by sensitive node attributes.
no code implementations • 29 Sep 2021 • Han Yue, Jundong Li, Hongfu Liu
Unsupervised feature selection aims to select a subset of the original features that is most useful for downstream tasks, without external guidance information.
1 code implementation • 9 Sep 2021 • Junwei Zhang, Min Gao, Junliang Yu, Lei Guo, Jundong Li, Hongzhi Yin
Technically, for (1), a hierarchical hypergraph convolutional network based on the user- and group-level hypergraphs is developed to model the complex tuplewise correlations among users within and beyond groups.
1 code implementation • 11 Aug 2021 • Yushun Dong, Ninghao Liu, Brian Jalaian, Jundong Li
We then develop a framework EDITS to mitigate the bias in attributed networks while maintaining the performance of GNNs in downstream tasks.
no code implementations • 12 Jun 2021 • Kaize Ding, Jianling Wang, Jundong Li, James Caverlee, Huan Liu
Graphs are widely used to model the relational structure of data, and the research of graph machine learning (ML) has a wide spectrum of applications ranging from drug design in molecular graphs to friendship recommendation in social networks.
no code implementations • 4 Jun 2021 • Xiaoying Xing, Hongfu Liu, Chen Chen, Jundong Li
Feature selection is a prevalent data preprocessing paradigm for various learning tasks.
1 code implementation • 29 May 2021 • Jing Ma, Yushun Dong, Zheng Huang, Daniel Mietchen, Jundong Li
Besides, as the confounders may be time-varying during COVID-19 (e.g., vigilance of residents changes in the course of the pandemic), it is even more difficult to capture them.
1 code implementation • 26 Apr 2021 • Yushun Dong, Kaize Ding, Brian Jalaian, Shuiwang Ji, Jundong Li
Existing efforts can be mainly categorized as spectral-based and spatial-based methods.
no code implementations • 27 Feb 2021 • Yitong Li, Duoduo Liao, Jundong Li, Wenying Ji
When a disaster occurs, maintaining and restoring community lifelines subsequently require collective efforts from various stakeholders.
4 code implementations • 16 Jan 2021 • Junliang Yu, Hongzhi Yin, Jundong Li, Qinyong Wang, Nguyen Quoc Viet Hung, Xiangliang Zhang
In this paper, we fill this gap and propose a multi-channel hypergraph convolutional network to enhance social recommendation by leveraging high-order user relations.
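The paper uses multiple motif-induced channels; for intuition only, a single plain hypergraph-convolution step (node-to-hyperedge-to-node averaging followed by a linear transform) can be sketched as:

```python
import numpy as np

def hypergraph_conv(X, H, Theta):
    """One hypergraph convolution step: nodes aggregate over hyperedges.

    X:     [N, d]  node features
    H:     [N, E]  incidence matrix (H[i, e] = 1 if node i is in hyperedge e)
    Theta: [d, d'] weight matrix (learnable in a real model)
    """
    Dv = np.clip(H.sum(axis=1), 1, None)       # node degrees
    De = np.clip(H.sum(axis=0), 1, None)       # hyperedge degrees
    edge_feat = (H / De).T @ X                 # average node features per hyperedge
    node_feat = (H @ edge_feat) / Dv[:, None]  # average hyperedge features per node
    return node_feat @ Theta

N, E, d = 6, 3, 4
H = np.random.randint(0, 2, size=(N, E)).astype(float)
X = np.random.randn(N, d)
out = hypergraph_conv(X, H, np.random.randn(d, d))
```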
1 code implementation • EMNLP 2020 • Kaize Ding, Jianling Wang, Jundong Li, Dingcheng Li, Huan Liu
Text classification is a critical research topic with broad applications in natural language processing.
1 code implementation • 20 Oct 2020 • Lei Cai, Jundong Li, Jie Wang, Shuiwang Ji
In this formalism, a link prediction problem is converted to a graph classification task.
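A common way to realize this formalism is to extract an enclosing subgraph around each candidate node pair and feed it to a graph classifier; a minimal networkx sketch of the extraction step (helper name is ours) is:

```python
import networkx as nx

def enclosing_subgraph(G, u, v, num_hops=1):
    """Extract the h-hop enclosing subgraph around a candidate link (u, v).
    The subgraph (with the target edge removed, if present) is what a graph
    classifier would label as 'link' or 'no link'."""
    nodes = set()
    for s in (u, v):
        nodes |= set(nx.single_source_shortest_path_length(G, s, cutoff=num_hops))
    sub = G.subgraph(nodes).copy()
    if sub.has_edge(u, v):
        sub.remove_edge(u, v)    # hide the edge being predicted
    return sub

G = nx.karate_club_graph()
sub = enclosing_subgraph(G, 0, 33, num_hops=1)
print(sub.number_of_nodes(), sub.number_of_edges())
```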
1 code implementation • 23 Jun 2020 • Kaize Ding, Jianling Wang, Jundong Li, Kai Shu, Chenghao Liu, Huan Liu
By constructing a pool of semi-supervised node classification tasks to mimic the real test environment, GPN is able to perform meta-learning on an attributed network and derive a highly generalizable model for handling the target classification task.
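GPN additionally weights nodes by their importance; as a bare-bones illustration of the prototypical-network backbone alone (class prototypes as mean support embeddings, queries assigned by distance):

```python
import torch

def prototype_classify(support_emb, support_labels, query_emb, num_classes):
    """Prototypical-network style classification: each class prototype is the
    mean support embedding; queries are scored by distance to prototypes."""
    prototypes = torch.stack([
        support_emb[support_labels == c].mean(dim=0) for c in range(num_classes)
    ])
    dists = torch.cdist(query_emb, prototypes)   # [num_query, num_classes]
    return (-dists).softmax(dim=-1)              # class probabilities

support = torch.randn(10, 32)                    # e.g. 5 classes x 2 shots
labels = torch.arange(5).repeat_interleave(2)
query = torch.randn(6, 32)
probs = prototype_classify(support, labels, query, num_classes=5)
```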
1 code implementation • 22 Apr 2020 • Jihong Wang, Minnan Luo, Fnu Suya, Jundong Li, Zijiang Yang, Qinghua Zheng
Recent studies have shown that graph convolution networks (GCNs) are vulnerable to carefully designed attacks, which aim to cause misclassification of a specific node on the graph with unnoticeable perturbations.
no code implementations • 5 Apr 2020 • Junliang Yu, Hongzhi Yin, Jundong Li, Min Gao, Zi Huang, Lizhen Cui
Social recommender systems are expected to improve recommendation quality by incorporating social information when there is little user-item interaction data.
no code implementations • 5 Mar 2020 • Min Gao, Junwei Zhang, Junliang Yu, Jundong Li, Junhao Wen, Qingyu Xiong
In general, two lines of research have been conducted, and their common ideas can be summarized as follows: (1) for the data noise issue, adversarial perturbations and adversarial sampling-based training often serve as a solution; (2) for the data sparsity issue, data augmentation, implemented by capturing the distribution of real data under the minimax framework, is the primary coping strategy.
no code implementations • 22 Dec 2019 • Ruocheng Guo, Jundong Li, Huan Liu
When such data comes with network information, the latter is potentially useful for correcting hidden confounding bias.
no code implementations • 8 Sep 2019 • Junliang Yu, Min Gao, Hongzhi Yin, Jundong Li, Chongming Gao, Qinyong Wang
Most of the recent studies of social recommendation assume that people share similar preferences with their friends and the online social relations are helpful in improving traditional recommender systems.
no code implementations • 19 Aug 2019 • Kaize Ding, Yichuan Li, Jundong Li, Chenghao Liu, Huan Liu
Inspired by the immense success of deep learning, graph neural networks (GNNs) are widely used to learn powerful node representations and have demonstrated promising performance on different graph learning tasks.
no code implementations • 11 Aug 2019 • Yuening Li, Xiao Huang, Jundong Li, Mengnan Du, Na Zou
SpecAE leverages Laplacian sharpening to amplify the distances between the representations of anomalies and those of the majority.
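A hedged sketch of Laplacian sharpening itself, i.e., pushing each node's representation away from its neighborhood average (the reverse of Laplacian smoothing), assuming a dense adjacency matrix:

```python
import numpy as np

def laplacian_sharpen(X, A, gamma=1.0):
    """Laplacian sharpening: move each node's representation away from the
    average of its neighbors, amplifying differences between a node and its
    neighborhood.

    X: [N, d] node representations, A: [N, N] adjacency matrix
    """
    deg = np.clip(A.sum(axis=1, keepdims=True), 1, None)
    neighbor_avg = (A @ X) / deg
    return (1 + gamma) * X - gamma * neighbor_avg

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
X = np.random.randn(3, 4)
X_sharp = laplacian_sharpen(X, A)
```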
no code implementations • 11 Aug 2019 • Yuening Li, Ninghao Liu, Jundong Li, Mengnan Du, Xia Hu
To this end, we propose a novel deep structured anomaly detection framework to identify the cross-modal anomalies embedded in the data.
1 code implementation • 17 Jul 2019 • Yuxin Ma, Tiankai Xie, Jundong Li, Ross Maciejewski
Machine learning models are currently being deployed in a variety of real-world applications where model predictions are used to make decisions about healthcare, bank loans, and numerous other critical tasks.
1 code implementation • 8 Jun 2019 • Ruocheng Guo, Jundong Li, Huan Liu
An important fact ignored by the majority of previous work is that observational data can come with network information that can be utilized to infer hidden confounders.
2 code implementations • 2019 SIAM International Conference on Data Mining (SDM) 2019 • Kaize Ding, Jundong Li, Rohit Bhanushali, Huan Liu
In particular, our proposed deep model: (1) explicitly models the topological structure and nodal attributes seamlessly for node embedding learning with the prevalent graph convolutional network (GCN); and (2) is customized to address the anomaly detection problem by virtue of deep autoencoder that leverages the learned embeddings to reconstruct the original data.
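The exact architecture is described in the paper; as a sketch of the scoring step only, a per-node anomaly score that combines attribute and structure reconstruction errors from an autoencoder of this kind could be computed as:

```python
import torch

def anomaly_scores(X, A, X_hat, A_hat, alpha=0.5):
    """Per-node anomaly score as a weighted sum of attribute and structure
    reconstruction errors.

    X, A:          original attribute matrix [N, d] and adjacency matrix [N, N]
    X_hat, A_hat:  their reconstructions from the learned node embeddings
    """
    attr_err = (X - X_hat).norm(dim=1)       # per-node attribute error
    struct_err = (A - A_hat).norm(dim=1)     # per-node structure error
    return alpha * attr_err + (1 - alpha) * struct_err

N, d = 50, 16
X, X_hat = torch.randn(N, d), torch.randn(N, d)
A = (torch.rand(N, N) < 0.1).float()
A_hat = torch.sigmoid(torch.randn(N, N))
scores = anomaly_scores(X, A, X_hat, A_hat)
suspects = scores.topk(5).indices            # nodes ranked most anomalous
```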
no code implementations • 25 Nov 2018 • Binbin Liu, Jundong Li, Yunquan Song, Xijun Liang, Ling Jian, Huan Liu
In particular, we extend the ONS algorithm with the trick of expected gradient and develop a novel second-order online learning algorithm, i.e., Online Newton Step with Expected Gradient (ONSEG).
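The expected-gradient trick is specific to ONSEG; shown below is only the generic Online Newton Step update it builds on, as a minimal numpy sketch (the projection step is omitted):

```python
import numpy as np

def ons_update(x, grad, A, gamma=0.1):
    """One Online Newton Step: maintain the running matrix A of gradient outer
    products and take a Newton-like step using A^{-1} * grad.

    x:    current parameter vector
    grad: gradient for this round (ONSEG would use an expected gradient here)
    A:    running sum of gradient outer products (plus a small ridge term)
    """
    A = A + np.outer(grad, grad)
    x = x - (1.0 / gamma) * np.linalg.solve(A, grad)
    return x, A

d = 5
x = np.zeros(d)
A = 1e-3 * np.eye(d)                 # small ridge term for invertibility
for _ in range(10):
    grad = np.random.randn(d)        # stand-in for the per-round gradient
    x, A = ons_update(x, grad, A)
```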
3 code implementations • 25 Sep 2018 • Ruocheng Guo, Lu Cheng, Jundong Li, P. Richard Hahn, Huan Liu
This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations.
2 code implementations • ASONAM 2019 2019 • Jundong Li, Liang Wu, Huan Liu
As opposed to manual feature engineering which is tedious and difficult to scale, network representation learning has attracted a surge of research interests as it automates the process of feature learning on graphs.
no code implementations • 6 Jun 2017 • Jundong Li, Harsh Dani, Xia Hu, Jiliang Tang, Yi Chang, Huan Liu
To the best of our knowledge, we are the first to tackle this problem with the following two challenges: (1) the inherently correlated network and node attributes could be noisy and incomplete, which necessitates a robust consensus representation to capture their individual properties and correlations; (2) the embedding learning needs to be performed in an online fashion to adapt to the changes accordingly.
no code implementations • 7 Nov 2016 • Jundong Li, Huan Liu
We are surrounded by huge amounts of large-scale high dimensional data.
2 code implementations • 29 Jan 2016 • Jundong Li, Kewei Cheng, Suhang Wang, Fred Morstatter, Robert P. Trevino, Jiliang Tang, Huan Liu
To facilitate and promote the research in this community, we also present an open-source feature selection repository that consists of most of the popular feature selection algorithms (http://featureselection.asu.edu/).