no code implementations • 2 Apr 2025 • Zihan Chen, Song Wang, Zhen Tan, Xingbo Fu, Zhenyu Lei, Peng Wang, Huan Liu, Cong Shen, Jundong Li
The rapid advancements in large language models (LLMs) have significantly enhanced their reasoning capabilities, driven by various strategies such as multi-agent collaboration.
1 code implementation • 30 Mar 2025 • Haochen Liu, Song Wang, Chen Chen, Jundong Li
Recent approaches leverage Graph Neural Networks (GNNs) to generate KG-based input embedding prefixes as soft prompts for LLMs but fail to account for question relevance, resulting in noisy prompts.
no code implementations • 28 Mar 2025 • Song Wang, Junhong Lin, Xiaojie Guo, Julian Shun, Jundong Li, Yada Zhu
In this paper, we propose the ReKnoS framework, which aims to Reason over Knowledge Graphs with Super-Relations.
1 code implementation • 19 Feb 2025 • Yaochen Zhu, Chao Wan, Harald Steck, Dawen Liang, Yesu Feng, Nathan Kallus, Jundong Li
Conversational recommender systems (CRS) aim to provide personalized recommendations via interactive dialogues with users.
1 code implementation • 1 Feb 2025 • Binchi Zhang, Zaiyi Zheng, Zhengzhang Chen, Jundong Li
In this paper, we introduce rotation symmetry, a novel form of parameter space symmetry for transformers that generalizes permutation symmetry by rotating parameter matrices in self-attention layers.
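As a minimal sketch of the invariance this excerpt points to (assuming the simplest parameterization, in which the same orthogonal matrix R multiplies the query and key projections; the paper's exact construction may differ), the check below verifies that such a rotation leaves self-attention scores unchanged because R Rᵀ = I. Permutation matrices are a special case of orthogonal matrices, which is the sense in which rotation symmetry generalizes permutation symmetry.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                              # head dimension (illustrative)
X = rng.normal(size=(5, d))        # token representations
W_q = rng.normal(size=(d, d))      # query projection
W_k = rng.normal(size=(d, d))      # key projection

# Random orthogonal matrix R: a "rotation" applied in parameter space.
R, _ = np.linalg.qr(rng.normal(size=(d, d)))

scores_orig = (X @ W_q) @ (X @ W_k).T
scores_rot = (X @ W_q @ R) @ (X @ W_k @ R).T  # rotate both projections

# Since R @ R.T = I, the attention scores are identical.
assert np.allclose(scores_orig, scores_rot)
```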
no code implementations • 31 Jan 2025 • Gyuseok Lee, Yaochen Zhu, Hwanjo Yu, Yao Zhou, Jundong Li
Diffusion-based recommender systems (DR) have gained increasing attention for their advanced generative and denoising capabilities.
no code implementations • 31 Jan 2025 • Binchi Zhang, Zhengzhang Chen, Zaiyi Zheng, Jundong Li, Haifeng Chen
Extensive experiments demonstrate the effectiveness of LOKA in LLM knowledge updating tasks.
no code implementations • 12 Jan 2025 • Zhenyu Lei, Yushun Dong, Weiyu Li, Rong Ding, Qi Wang, Jundong Li
Large language models (LLMs) have revolutionized scientific research, transforming various fields with their exceptional capabilities.
no code implementations • 7 Jan 2025 • Song Wang, Xiaodong Yang, Rashidul Islam, Huiyuan Chen, Minghua Xu, Jundong Li, Yiwei Cai
These methods often employ a two-step strategy that first creates augmented environments and subsequently identifies invariant subgraphs to improve generalizability.
no code implementations • 6 Jan 2025 • Zaiyi Zheng, Yushun Dong, Song Wang, Haochen Liu, Qi Wang, Jundong Li
Large Language Models (LLMs) have shown impressive performance in various tasks, including knowledge graph completion (KGC).
1 code implementation • 26 Dec 2024 • Xingbo Fu, Zihan Chen, Yinhan He, Song Wang, Binchi Zhang, Chen Chen, Jundong Li
In the real world, however, graph data can suffer from significant distribution shifts across clients, as clients may collect their graph data for different purposes.
2 code implementations • 23 Dec 2024 • Song Wang, Zhenyu Lei, Zhen Tan, Jiaqi Ding, Xinyu Zhao, Yushun Dong, Guorong Wu, Tianlong Chen, Chen Chen, Aiying Zhang, Jundong Li
As such, conventional GNNs struggle to learn from these pathways due to the long-range dependencies spanning multiple pathways.
1 code implementation • 14 Dec 2024 • Zhenyu Lei, Yushun Dong, Jundong Li, Chen Chen
However, in real-world applications, most nodes may not possess any available temporal data during training.
1 code implementation • 10 Dec 2024 • Yushun Dong, Patrick Soga, Yinhan He, Song Wang, Jundong Li
To demystify such a conflict, this paper introduces a comprehensive benchmark to measure and evaluate GNNs' capability in capturing and leveraging the information encoded in different frequency components of the input graph data.
no code implementations • 26 Nov 2024 • Jiazheng Li, Jundong Li, Chuxu Zhang
Graph neural networks stand as the predominant technique for graph representation learning owing to their strong expressive power, yet their performance depends heavily on the availability of high-quality labels for end-to-end training.
1 code implementation • 19 Nov 2024 • Tonmoy Hossain, Jing Ma, Jundong Li, Miaomiao Zhang
In this paper, we introduce a novel framework that for the first time develops invariant shape representation learning (ISRL) to further strengthen the robustness of image classifiers.
no code implementations • 13 Nov 2024 • Xingbo Fu, Song Wang, Yushun Dong, Binchi Zhang, Chen Chen, Jundong Li
To enable structure knowledge transfer, we design a GNN model and a feature encoder on each client.
2 code implementations • 25 Oct 2024 • Kexin Zhang, Shuhan Liu, Song Wang, Weili Shi, Chen Chen, Pan Li, Sheng Li, Jundong Li, Kaize Ding
Consequently, there has been a surge in research on graph machine learning under distribution shifts, aiming to train models to achieve satisfactory performance on out-of-distribution (OOD) test data.
no code implementations • 25 Oct 2024 • Yinhan He, Wendy Zheng, Yaochen Zhu, Jing Ma, Saumitra Mishra, Natraj Raman, Ninghao Liu, Jundong Li
Methodologically, we design a significant subgraph generator and a counterfactual subgraph autoencoder in our GlobalGCE, where the subgraphs and the rules can be effectively generated.
1 code implementation • 19 Oct 2024 • Yinhan He, Zaiyi Zheng, Patrick Soga, Yaozhen Zhu, Yushun Dong, Jundong Li
In recent years, Graph Neural Networks (GNNs) have become successful in molecular property prediction tasks such as toxicity analysis.
no code implementations • 16 Oct 2024 • Zihan Chen, Bike Xie, Jundong Li, Cong Shen
Large Language Models (LLMs) have demonstrated remarkable success across a wide range of language tasks, but their deployment on edge devices remains challenging due to the substantial memory requirements imposed by their large parameter sizes.
1 code implementation • 18 Aug 2024 • Xingbo Fu, Zihan Chen, Binchi Zhang, Chen Chen, Jundong Li
Moreover, FGL encounters a unique challenge for the node classification task: nodes from a minority class in a client are more likely to have biased neighboring information, which prevents FGL from learning expressive node embeddings with Graph Neural Networks (GNNs).
no code implementations • 8 Aug 2024 • Yaochen Zhu, Liang Wu, Binchi Zhang, Song Wang, Qi Guo, Liangjie Hong, Luke Simon, Jundong Li
The job marketplace is a heterogeneous graph composed of interactions among members (job seekers), companies, and jobs.
1 code implementation • 1 Aug 2024 • Binchi Zhang, Yushun Dong, Tianhao Wang, Jundong Li
In the field of machine unlearning, certified unlearning has been extensively studied in convex machine learning models due to its high efficiency and strong theoretical guarantees.
1 code implementation • 1 Aug 2024 • Binchi Zhang, Zihan Chen, Cong Shen, Jundong Li
These strategies enable data owners to ascertain whether their target data has been effectively unlearned from the model.
no code implementations • 28 Jul 2024 • Yushun Dong, Binchi Zhang, Zhenyu Lei, Na Zou, Jundong Li
Specifically, we first instantiate four types of unlearning requests on graphs, and then we propose an approximation approach to flexibly handle these unlearning requests over diverse GNNs.
no code implementations • 16 Jul 2024 • Yushun Dong, Song Wang, Zhenyu Lei, Zaiyi Zheng, Jing Ma, Chen Chen, Jundong Li
Fairness-aware graph learning has gained increasing attention in recent years.
no code implementations • 2 Jul 2024 • Song Wang, Peng Wang, Tong Zhou, Yushun Dong, Zhen Tan, Jundong Li
To address these limitations, we collect a variety of datasets designed for the bias evaluation of LLMs, and further propose CEB, a Compositional Evaluation Benchmark that covers different types of bias across different social groups and tasks.
no code implementations • 26 Jun 2024 • Zhen Tan, Chengshuai Zhao, Raha Moraffah, YiFan Li, Song Wang, Jundong Li, Tianlong Chen, Huan Liu
Retrieval-Augmented Generative (RAG) models enhance Large Language Models (LLMs) by integrating external knowledge bases, improving their performance in applications like fact-checking and information searching.
no code implementations • 20 Jun 2024 • Yaochen Zhu, Yinhan He, Jing Ma, Mengxuan Hu, Sheng Li, Jundong Li
Depending on the type of unobserved variables and the specific CI task, various consequences can be incurred if these latent variables are carelessly handled, such as biased estimation of causal effects, incomplete understanding of causal mechanisms, lack of individual-level causal consideration, etc.
1 code implementation • 19 Jun 2024 • Haochen Liu, Song Wang, Chen Chen, Jundong Li
To overcome these challenges, we propose SAFER (Subgraph Adaptation for Few-shot Relational Reasoning), a novel approach that effectively adapts the information in contextualized graphs to various subgraphs generated from support and query triplets to perform the prediction.
1 code implementation • 19 Jun 2024 • Haochen Liu, Song Wang, Yaochen Zhu, Yushun Dong, Jundong Li
In addition, LLMs tend to pick only knowledge with a direct semantic relationship to the input text, while potentially useful knowledge with indirect semantics may be ignored.
no code implementations • 13 Jun 2024 • Alexi Gladstone, Ganesh Nanduru, Md Mofijul Islam, Aman Chadha, Jundong Li, Tariq Iqbal
One of the predominant methods for training world models is autoregressive prediction of the next element of a sequence in the output space.
no code implementations • 6 Jun 2024 • Zihan Chen, Song Wang, Cong Shen, Jundong Li
By aggregating nodes from diverse pieces and annotating the corresponding instances, we identify a set of diverse and representative instances for ICL.
no code implementations • 27 May 2024 • Mucong Ding, Yinhan He, Jundong Li, Furong Huang
However, owing to the interdependence of graph nodes, coreset selection, which selects subsets of the data examples, has not been successfully applied to speed up GNN training on large graphs, warranting special treatment.
no code implementations • 17 May 2024 • Song Wang, Yushun Dong, Binchi Zhang, Zihan Chen, Xingbo Fu, Yinhan He, Cong Shen, Chuxu Zhang, Nitesh V. Chawla, Jundong Li
In this survey paper, we explore three critical aspects vital for enhancing safety in Graph ML: reliability, generalizability, and confidentiality.
1 code implementation • 13 Mar 2024 • Xuansheng Wu, Haiyan Zhao, Yaochen Zhu, Yucheng Shi, Fan Yang, Tianming Liu, Xiaoming Zhai, Wenlin Yao, Jundong Li, Mengnan Du, Ninghao Liu
Therefore, in this paper, we introduce Usable XAI in the context of LLMs by analyzing (1) how XAI can benefit LLMs and AI systems, and (2) how LLMs can contribute to the advancement of XAI.
no code implementations • 2 Mar 2024 • Song Wang, Zhen Tan, Xinyu Zhao, Tianlong Chen, Huan Liu, Jundong Li
In contrast, in this work, we propose a novel self-conditioned graph generation framework designed to explicitly model graph distributions and employ these distributions to guide the generation process.
1 code implementation • 21 Feb 2024 • Zhen Tan, Dawei Li, Song Wang, Alimohammad Beigi, Bohan Jiang, Amrita Bhattacharjee, Mansooreh Karami, Jundong Li, Lu Cheng, Huan Liu
Furthermore, this survey includes an in-depth taxonomy of data types that LLMs can annotate, a comprehensive review of learning strategies for models utilizing LLM-generated annotations, and a detailed discussion of the primary challenges and limitations associated with using LLMs for data annotation and synthesis.
no code implementations • 15 Feb 2024 • Chengshuai Shi, Kun Yang, Zihan Chen, Jundong Li, Jing Yang, Cong Shen
TRIPLE is built on a novel connection established between prompt optimization and fixed-budget best arm identification (BAI-FB) in multi-armed bandits (MAB); thus, it is capable of leveraging the rich toolbox from BAI-FB systematically and also incorporating unique characteristics of prompt optimization.
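For intuition on the fixed-budget BAI side of this connection, a generic sequential-halving sketch over candidate prompts is given below. This is not the TRIPLE algorithm itself, and the `evaluate` scorer is a hypothetical stand-in for a single LLM evaluation call; it only illustrates how a fixed budget of evaluations can be spent to identify a strong prompt.

```python
import math
import random

def sequential_halving(prompts, evaluate, budget):
    """Generic fixed-budget best-arm identification via sequential halving.

    prompts:  candidate prompts (the "arms")
    evaluate: callable prompt -> noisy score (e.g., one LLM evaluation; hypothetical)
    budget:   total number of evaluations allowed
    """
    survivors = list(prompts)
    rounds = max(1, math.ceil(math.log2(len(prompts))))
    per_round = budget // rounds
    for _ in range(rounds):
        if len(survivors) == 1:
            break
        pulls = max(1, per_round // len(survivors))  # split the round's budget evenly
        means = {p: sum(evaluate(p) for _ in range(pulls)) / pulls for p in survivors}
        survivors.sort(key=means.get, reverse=True)
        survivors = survivors[: max(1, len(survivors) // 2)]  # keep the better half
    return survivors[0]

# Toy usage with a hypothetical noisy scorer.
true_quality = {"p1": 0.3, "p2": 0.6, "p3": 0.8, "p4": 0.5}
noisy_score = lambda p: true_quality[p] + random.gauss(0, 0.1)
print(sequential_halving(list(true_quality), noisy_score, budget=200))
```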
no code implementations • 23 Dec 2023 • Zihan Chen, Jundong Li, Cong Shen
FedACS integrates an attention mechanism to enhance collaboration among clients with similar data distributions and mitigate the data scarcity issue.
1 code implementation • 8 Nov 2023 • Zhen Tan, Lu Cheng, Song Wang, Yuan Bo, Jundong Li, Huan Liu
Pretrained language models (PLMs) have made significant strides in various natural language processing tasks.
1 code implementation • 5 Nov 2023 • Yushun Dong, Binchi Zhang, Hanghang Tong, Jundong Li
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks over the years.
no code implementations • 2 Nov 2023 • Song Wang, Zhen Tan, Ruocheng Guo, Jundong Li
Adopting a two-stage paradigm of pretraining followed by fine-tuning, Pretrained Language Models (PLMs) have achieved substantial advancements in the field of natural language processing.
1 code implementation • 2 Nov 2023 • Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li
We first extend the vocabulary of pretrained LLMs with user/item ID tokens to faithfully model user/item collaborative and content semantics.
no code implementations • 24 Oct 2023 • Song Wang, Yaochen Zhu, Haochen Liu, Zaiyi Zheng, Chen Chen, Jundong Li
Afterward, we provide an innovative taxonomy of KME techniques based on how the new knowledge is introduced into pre-trained LLMs, and investigate existing KME strategies while analyzing key insights, advantages, and limitations of methods from each category.
no code implementations • 23 Oct 2023 • Xiaotian Han, Kaixiong Zhou, Ting-Hsiang Wang, Jundong Li, Fei Wang, Na Zou
Specifically, we first analyzed multiple graphs and observed that marginal nodes in graphs achieve worse downstream-task performance than other nodes in graph neural networks.
1 code implementation • 23 Oct 2023 • Mouxiang Chen, Zemin Liu, Chenghao Liu, Jundong Li, Qiheng Mao, Jianling Sun
Based on this framework, we propose a prompt-based transferability test to find the most relevant pretext task in order to reduce the semantic gap.
1 code implementation • 20 Oct 2023 • Binchi Zhang, Yushun Dong, Chen Chen, Yada Zhu, Minnan Luo, Jundong Li
Fairness-aware graph neural networks (GNNs) have gained a surge of attention as they can reduce the bias of predictions on any demographic group (e.g., female) in graph-based applications.
no code implementations • 28 Aug 2023 • Song Wang, Jing Ma, Lu Cheng, Jundong Li
These auxiliary sets contain several labeled training samples that can enhance the model performance regarding fairness in meta-test tasks, thereby allowing for the transfer of learned useful fairness-oriented knowledge to meta-test tasks.
1 code implementation • 18 Aug 2023 • Yucheng Shi, Yushun Dong, Qiaoyu Tan, Jundong Li, Ninghao Liu
By considering embeddings encompassing graph topology and attribute information as reconstruction targets, our model could capture more generalized and comprehensive knowledge.
1 code implementation • 22 Jul 2023 • Qiaoyu Tan, Xin Zhang, Xiao Huang, Hao Chen, Jundong Li, Xia Hu
Graph neural networks (GNNs) have shown prominent performance on attributed network embedding.
no code implementations • 17 Jul 2023 • Jing Ma, Ruocheng Guo, Aidong Zhang, Jundong Li
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
no code implementations • 17 Jul 2023 • Jing Ma, Chen Chen, Anil Vullikanti, Ritwick Mishra, Gregory Madden, Daniel Borrajo, Jundong Li
In this paper, we study the problem of causal effect estimation with treatment entangled in a graph.
1 code implementation • 27 Jun 2023 • Song Wang, Zhen Tan, Huan Liu, Jundong Li
First, we propose to enhance the intra-class generalizability by involving a contrastive two-step optimization in each episode to explicitly align node embeddings in the same classes.
1 code implementation • 17 Jun 2023 • Song Wang, Xingbo Fu, Kaize Ding, Chen Chen, Huiyuan Chen, Jundong Li
In this way, the server can exploit the computational power of all clients and train the model on a larger set of data samples among all clients.
1 code implementation • 5 Jun 2023 • Yaochen Zhu, Jing Ma, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li
But since sensitive features may also affect user interests in a fair manner (e.g., race influencing culture-based preferences), indiscriminately eliminating all influences of sensitive features inevitably degrades recommendation quality and necessary diversity.
no code implementations • 2 May 2023 • Xingbo Fu, Chen Chen, Yushun Dong, Anil Vullikanti, Eili Klein, Gregory Madden, Jundong Li
In this paper, we propose a novel problem of antibiogram pattern prediction that aims to predict which patterns will appear in the future.
no code implementations • 2 May 2023 • Yushun Dong, Jundong Li, Tobias Schnabel
In recent years, neural models have been repeatedly touted to exhibit state-of-the-art performance in recommendation.
1 code implementation • 6 Jan 2023 • Song Wang, Yushun Dong, Kaize Ding, Chen Chen, Jundong Li
Recent few-shot node classification methods typically learn from classes with abundant labeled nodes (i.e., meta-training classes) and then generalize to classes with limited labeled nodes (i.e., meta-test classes).
1 code implementation • 3 Jan 2023 • Yaochen Zhu, Jing Ma, Jundong Li
Traditional RSs estimate user interests and predict their future behaviors by utilizing correlations in the observational historical activities, their profiles, and the content of interacted items.
1 code implementation • 3 Jan 2023 • Yushun Dong, Binchi Zhang, Yiling Yuan, Na Zou, Qi Wang, Jundong Li
Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model).
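The teacher-student recipe described here follows the standard knowledge-distillation template. As a hedged sketch (a generic Hinton-style KD objective, not necessarily this paper's loss), the student GNN's node logits are trained to match the teacher GNN's softened predictions while still fitting the ground-truth labels:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Generic KD objective; temperature T and weight alpha are illustrative."""
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)   # supervised term on true labels
    return alpha * kd + (1 - alpha) * ce           # weighted combination
```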
1 code implementation • 11 Dec 2022 • Zhen Tan, Song Wang, Kaize Ding, Jundong Li, Huan Liu
More recently, inspired by the development of graph self-supervised learning, transferring pretrained node embeddings for few-shot node classification could be a promising alternative to meta-learning but remains underexplored.
1 code implementation • 25 Nov 2022 • Yushun Dong, Song Wang, Jing Ma, Ninghao Liu, Jundong Li
In this paper, we study a novel problem of interpreting GNN unfairness through attributing it to the influence of training nodes.
1 code implementation • 21 Oct 2022 • Song Wang, Chen Chen, Jundong Li
Therefore, to adaptively learn node representations across meta-tasks, we propose a novel framework that learns a task-specific structure for each meta-task.
no code implementations • 16 Oct 2022 • Jing Ma, Ruocheng Guo, Saumitra Mishra, Aidong Zhang, Jundong Li
Counterfactual explanations promote explainability in machine learning models by answering the question "how should an input instance be perturbed to obtain a desired predicted label?".
no code implementations • 30 Sep 2022 • Chunhui Zhang, Hongfu Liu, Jundong Li, Yanfang Ye, Chuxu Zhang
Later, the trained encoder is frozen as a teacher model to distill a student model with a contrastive loss.
1 code implementation • 17 Aug 2022 • Zhenyu Lei, Herun Wan, Wenqian Zhang, Shangbin Feng, Zilong Chen, Jundong Li, Qinghua Zheng, Minnan Luo
In addition, given the stealing behavior of novel Twitter bots, BIC proposes to model semantic consistency in tweets based on attention weights while using it to augment the decision process.
1 code implementation • 16 Aug 2022 • Zhaoxuan Tan, Zilong Chen, Shangbin Feng, Qingyue Zhang, Qinghua Zheng, Jundong Li, Minnan Luo
Knowledge Graph Embeddings (KGE) aim to map entities and relations to low-dimensional spaces and have become the de facto standard for knowledge graph completion.
no code implementations • 24 Jul 2022 • Xingbo Fu, Binchi Zhang, Yushun Dong, Chen Chen, Jundong Li
Federated Graph Machine Learning (FGML) is a promising solution to tackle this challenge by training graph machine learning models in a federated manner.
no code implementations • 7 Jul 2022 • Jing Ma, Mengting Wan, Longqi Yang, Jundong Li, Brent Hecht, Jaime Teevan
Hypergraphs provide an effective abstraction for modeling multi-way group interactions among nodes, where each hyperedge can connect any number of nodes.
1 code implementation • 24 Jun 2022 • Yushun Dong, Song Wang, Yu Wang, Tyler Derr, Jundong Li
The low transparency on how the structure of the input network influences the bias in GNN outcome largely limits the safe adoption of GNNs in various decision-critical scenarios.
1 code implementation • 23 Jun 2022 • Song Wang, Kaize Ding, Chuxu Zhang, Chen Chen, Jundong Li
Then we transfer such knowledge to the classes with limited labeled nodes via our proposed task-adaptive modules.
2 code implementations • 21 Jun 2022 • Kay Liu, Yingtong Dou, Yue Zhao, Xueying Ding, Xiyang Hu, Ruitong Zhang, Kaize Ding, Canyu Chen, Hao Peng, Kai Shu, Lichao Sun, Jundong Li, George H. Chen, Zhihao Jia, Philip S. Yu
To bridge this gap, we present, to the best of our knowledge, the first comprehensive benchmark for unsupervised outlier node detection on static attributed graphs, called BOND, with the following highlights.
1 code implementation • 9 Jun 2022 • Shangbin Feng, Zhaoxuan Tan, Herun Wan, Ningnan Wang, Zilong Chen, Binchi Zhang, Qinghua Zheng, Wenqian Zhang, Zhenyu Lei, Shujie Yang, Xinshun Feng, Qingyue Zhang, Hongrui Wang, YuHan Liu, Yuyang Bai, Heng Wang, Zijian Cai, Yanbo Wang, Lijing Zheng, Zihan Ma, Jundong Li, Minnan Luo
Twitter bot detection has become an increasingly important task to combat misinformation, facilitate social media moderation, and preserve the integrity of the online discourse.
1 code implementation • 7 Jun 2022 • Yu Wang, Yuying Zhao, Yushun Dong, Huiyuan Chen, Jundong Li, Tyler Derr
Motivated by our analysis, we propose Fair View Graph Neural Network (FairVGNN) to generate fair views of features by automatically identifying and masking sensitive-correlated features considering correlation variation after feature propagation.
1 code implementation • 20 May 2022 • Qinghua Zheng, Jihong Wang, Minnan Luo, YaoLiang Yu, Jundong Li, Lina Yao, Xiaojun Chang
Due to the superior performance of Graph Neural Networks (GNNs) in various domains, there is an increasing interest in the GNN explanation problem: "which fraction of the input graph is the most crucial to decide the model's decision?"
1 code implementation • 5 May 2022 • Song Wang, Yushun Dong, Xiao Huang, Chen Chen, Jundong Li
Specifically, these works propose to accumulate meta-knowledge across diverse meta-training tasks, and then generalize such meta-knowledge to the target task with a disjoint label set.
no code implementations • 24 Apr 2022 • Zheng Huang, Jing Ma, Yushun Dong, Natasha Zhang Foutz, Jundong Li
Noticeably, LBSNs have offered unparalleled access to abundant heterogeneous relational information about users and POIs (including user-user social relations, such as families or colleagues; and user-POI visiting relations).
2 code implementations • 21 Apr 2022 • Yushun Dong, Jing Ma, Song Wang, Chen Chen, Jundong Li
Recently, algorithmic fairness has been extensively studied in graph-based applications.
1 code implementation • NAACL 2022 • Wenqian Zhang, Shangbin Feng, Zilong Chen, Zhenyu Lei, Jundong Li, Minnan Luo
Previous approaches generally focus on leveraging textual content to identify stances, while they fail to reason with background knowledge or leverage the rich semantic and syntactic textual labels in news articles.
1 code implementation • 29 Mar 2022 • Junliang Yu, Hongzhi Yin, Xin Xia, Tong Chen, Jundong Li, Zi Huang
In recent years, neural architecture-based recommender systems have achieved tremendous success, but they still fall short of expectation when dealing with highly sparse data.
no code implementations • 17 Mar 2022 • Chuxu Zhang, Kaize Ding, Jundong Li, Xiangliang Zhang, Yanfang Ye, Nitesh V. Chawla, Huan Liu
In light of this, few-shot learning on graphs (FSLG), which combines the strengths of graph representation learning and few-shot learning, has been proposed to tackle performance degradation in the face of limited annotated data.
no code implementations • 13 Feb 2022 • Xiaotian Han, Zhimeng Jiang, Ninghao Liu, Qingquan Song, Jundong Li, Xia Hu
Learning discriminative node representations benefits various downstream tasks in graph analysis such as community detection and node classification.
no code implementations • 21 Jan 2022 • Jihong Wang, Minnan Luo, Jundong Li, Ziqi Liu, Jun Zhou, Qinghua Zheng
Our RGIB attempts to learn robust node representations against adversarial perturbations by preserving the original information in the benign graph while eliminating the adversarial information in the adversarial graph.
1 code implementation • 10 Jan 2022 • Jing Ma, Ruocheng Guo, Mengting Wan, Longqi Yang, Aidong Zhang, Jundong Li
In this framework, we generate counterfactuals corresponding to perturbations on each node's and its neighbors' sensitive attributes.
no code implementations • 26 Oct 2021 • Nan Wang, Lu Lin, Jundong Li, Hongning Wang
In this paper, we propose a principled new way for unbiased graph embedding by learning node embeddings from an underlying bias-free graph, which is not influenced by sensitive node attributes.
no code implementations • 29 Sep 2021 • Han Yue, Jundong Li, Hongfu Liu
Unsupervised feature selection aims to select a subset of the original features that is most useful for downstream tasks, without external guidance information.
1 code implementation • 9 Sep 2021 • Junwei Zhang, Min Gao, Junliang Yu, Lei Guo, Jundong Li, Hongzhi Yin
Technically, for (1), a hierarchical hypergraph convolutional network based on the user- and group-level hypergraphs is developed to model the complex tuplewise correlations among users within and beyond groups.
1 code implementation • 11 Aug 2021 • Yushun Dong, Ninghao Liu, Brian Jalaian, Jundong Li
We then develop a framework EDITS to mitigate the bias in attributed networks while maintaining the performance of GNNs in downstream tasks.
no code implementations • 12 Jun 2021 • Kaize Ding, Jianling Wang, Jundong Li, James Caverlee, Huan Liu
Graphs are widely used to model the relational structure of data, and the research of graph machine learning (ML) has a wide spectrum of applications ranging from drug design in molecular graphs to friendship recommendation in social networks.
no code implementations • 4 Jun 2021 • Xiaoying Xing, Hongfu Liu, Chen Chen, Jundong Li
Feature selection is a prevalent data preprocessing paradigm for various learning tasks.
1 code implementation • 29 May 2021 • Jing Ma, Yushun Dong, Zheng Huang, Daniel Mietchen, Jundong Li
Besides, as the confounders may be time-varying during COVID-19 (e.g., residents' vigilance changes over the course of the pandemic), it is even more difficult to capture them.
1 code implementation • 26 Apr 2021 • Yushun Dong, Kaize Ding, Brian Jalaian, Shuiwang Ji, Jundong Li
Existing efforts can be mainly categorized as spectral-based and spatial-based methods.
no code implementations • 27 Feb 2021 • Yitong Li, Duoduo Liao, Jundong Li, Wenying Ji
When a disaster occurs, maintaining and restoring community lifelines subsequently require collective efforts from various stakeholders.
4 code implementations • 16 Jan 2021 • Junliang Yu, Hongzhi Yin, Jundong Li, Qinyong Wang, Nguyen Quoc Viet Hung, Xiangliang Zhang
In this paper, we fill this gap and propose a multi-channel hypergraph convolutional network to enhance social recommendation by leveraging high-order user relations.
1 code implementation • EMNLP 2020 • Kaize Ding, Jianling Wang, Jundong Li, Dingcheng Li, Huan Liu
Text classification is a critical research topic with broad applications in natural language processing.
2 code implementations • 20 Oct 2020 • Lei Cai, Jundong Li, Jie Wang, Shuiwang Ji
In this formalism, a link prediction problem is converted to a graph classification task.
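To make the formalism concrete, below is a minimal SEAL-style illustration (the hop count, the use of networkx, and the toy examples are assumptions, not this paper's exact pipeline): an enclosing subgraph is extracted around each candidate node pair, and link prediction becomes binary classification over these small graphs.

```python
import networkx as nx

def enclosing_subgraph(G, u, v, hops=1):
    """Extract the h-hop enclosing subgraph around a candidate link (u, v).

    Each subgraph becomes one example for a graph classifier:
    label 1 for an observed link, 0 for a sampled non-edge.
    """
    nodes = set(nx.single_source_shortest_path_length(G, u, cutoff=hops))
    nodes |= set(nx.single_source_shortest_path_length(G, v, cutoff=hops))
    sub = G.subgraph(nodes).copy()
    sub.remove_edges_from([(u, v), (v, u)])  # hide the target link itself
    return sub

G = nx.karate_club_graph()
pos_example = enclosing_subgraph(G, 0, 1)    # existing edge    -> positive graph
neg_example = enclosing_subgraph(G, 0, 16)   # sampled non-edge -> negative graph
print(pos_example.number_of_nodes(), neg_example.number_of_nodes())
```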
2 code implementations • 23 Jun 2020 • Kaize Ding, Jianling Wang, Jundong Li, Kai Shu, Chenghao Liu, Huan Liu
By constructing a pool of semi-supervised node classification tasks to mimic the real test environment, GPN is able to perform meta-learning on an attributed network and derive a highly generalizable model for handling the target classification task.
1 code implementation • 22 Apr 2020 • Jihong Wang, Minnan Luo, Fnu Suya, Jundong Li, Zijiang Yang, Qinghua Zheng
Recent studies have shown that graph convolution networks (GCNs) are vulnerable to carefully designed attacks, which aim to cause misclassification of a specific node on the graph with unnoticeable perturbations.
no code implementations • 5 Apr 2020 • Junliang Yu, Hongzhi Yin, Jundong Li, Min Gao, Zi Huang, Lizhen Cui
Social recommender systems are expected to improve recommendation quality by incorporating social information when there is little user-item interaction data.
no code implementations • 5 Mar 2020 • Min Gao, Junwei Zhang, Junliang Yu, Jundong Li, Junhao Wen, Qingyu Xiong
In general, two lines of research have been conducted, and their common ideas can be summarized as follows: (1) for the data noise issue, adversarial perturbations and adversarial sampling-based training often serve as a solution; (2) for the data sparsity issue, data augmentation, implemented by capturing the distribution of real data under the minimax framework, is the primary coping strategy.
no code implementations • 22 Dec 2019 • Ruocheng Guo, Jundong Li, Huan Liu
When such data comes with network information, the latter can potentially be used to correct for hidden confounding bias.
no code implementations • 8 Sep 2019 • Junliang Yu, Min Gao, Hongzhi Yin, Jundong Li, Chongming Gao, Qinyong Wang
Most of the recent studies of social recommendation assume that people share similar preferences with their friends and the online social relations are helpful in improving traditional recommender systems.
no code implementations • 19 Aug 2019 • Kaize Ding, Yichuan Li, Jundong Li, Chenghao Liu, Huan Liu
Inspired by the immense success of deep learning, graph neural networks (GNNs) are widely used to learn powerful node representations and have demonstrated promising performance on different graph learning tasks.
no code implementations • 11 Aug 2019 • Yuening Li, Ninghao Liu, Jundong Li, Mengnan Du, Xia Hu
To this end, we propose a novel deep structured anomaly detection framework to identify the cross-modal anomalies embedded in the data.
no code implementations • 11 Aug 2019 • Yuening Li, Xiao Huang, Jundong Li, Mengnan Du, Na Zou
SpecAE leverages Laplacian sharpening to amplify the distances between representations of anomalies and the ones of the majority.
1 code implementation • 17 Jul 2019 • Yuxin Ma, Tiankai Xie, Jundong Li, Ross Maciejewski
Machine learning models are currently being deployed in a variety of real-world applications where model predictions are used to make decisions about healthcare, bank loans, and numerous other critical tasks.
1 code implementation • 8 Jun 2019 • Ruocheng Guo, Jundong Li, Huan Liu
An important fact ignored by the majority of previous work is that observational data can come with network information that can be utilized to infer hidden confounders.
2 code implementations • 2019 SIAM International Conference on Data Mining (SDM) 2019 • Kaize Ding, Jundong Li, Rohit Bhanushali, Huan Liu
In particular, our proposed deep model: (1) explicitly models the topological structure and nodal attributes seamlessly for node embedding learning with the prevalent graph convolutional network (GCN); and (2) is customized to address the anomaly detection problem by virtue of deep autoencoder that leverages the learned embeddings to reconstruct the original data.
no code implementations • 25 Nov 2018 • Binbin Liu, Jundong Li, Yunquan Song, Xijun Liang, Ling Jian, Huan Liu
In particular, we extend the ONS algorithm with the trick of expected gradient and develop a novel second-order online learning algorithm, i.e., Online Newton Step with Expected Gradient (ONSEG).
3 code implementations • 25 Sep 2018 • Ruocheng Guo, Lu Cheng, Jundong Li, P. Richard Hahn, Huan Liu
This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations.
2 code implementations • ASONAM 2019 2019 • Jundong Li, Liang Wu, Huan Liu
As opposed to manual feature engineering which is tedious and difficult to scale, network representation learning has attracted a surge of research interests as it automates the process of feature learning on graphs.
no code implementations • 6 Jun 2017 • Jundong Li, Harsh Dani, Xia Hu, Jiliang Tang, Yi Chang, Huan Liu
To the best of our knowledge, we are the first to tackle this problem with the following two challenges: (1) the inherently correlated network and node attributes could be noisy and incomplete, which necessitates a robust consensus representation to capture their individual properties and correlations; (2) the embedding learning needs to be performed in an online fashion to adapt to the changes accordingly.
no code implementations • 7 Nov 2016 • Jundong Li, Huan Liu
We are surrounded by huge amounts of large-scale high dimensional data.
2 code implementations • 29 Jan 2016 • Jundong Li, Kewei Cheng, Suhang Wang, Fred Morstatter, Robert P. Trevino, Jiliang Tang, Huan Liu
To facilitate and promote the research in this community, we also present an open-source feature selection repository that consists of most of the popular feature selection algorithms (http://featureselection.asu.edu/).