1 code implementation • 25 Mar 2025 • Yuan Li, Jun Hu, Jiaxin Jiang, Zemin Liu, Bryan Hooi, Bingsheng He
Recent advances in graph learning have paved the way for innovative retrieval-augmented generation (RAG) systems that leverage the inherent relational structures in graph data.
Ranked #1 on Modality completion on Amazon Baby
1 code implementation • 13 Feb 2025 • Xiang Liu, Zhenheng Tang, Xia Li, Yijun Song, Sijie Ji, Zemin Liu, Bo Han, Linshan Jiang, Jialin Li
One-shot Federated Learning (OFL) is a distributed machine learning paradigm that constrains client-server communication to a single round, addressing privacy and communication overhead issues associated with multiple rounds of data exchange in traditional Federated Learning (FL).
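As a point of reference, the single-round constraint can be pictured as one upload per client followed by a single server-side aggregation. A minimal sketch follows; the flat-dict model format and simple averaging rule are assumptions for illustration, not the paper's method.

```python
# Minimal sketch of one-shot federated averaging: each client trains locally
# once, uploads its parameters a single time, and the server averages them.
from typing import Dict, List

def one_shot_aggregate(client_models: List[Dict[str, float]]) -> Dict[str, float]:
    """Build the global model from a single upload per client (no further rounds)."""
    n = len(client_models)
    keys = client_models[0].keys()
    return {k: sum(m[k] for m in client_models) / n for k in keys}

# Example: two clients each send their locally trained weights exactly once.
global_model = one_shot_aggregate([{"w": 0.2, "b": 0.1}, {"w": 0.6, "b": -0.1}])
# -> {"w": 0.4, "b": 0.0}
```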
no code implementations • 13 Feb 2025 • Xiang Liu, Mingchen Li, Xia Li, Leigang Qu, Zifan Peng, Yijun Song, Zemin Liu, Linshan Jiang, Jialin Li
For instance, using the VGG16 model on the CIFAR-10 dataset, we achieve a parameter reduction of 85%, a decrease in FLOPs of 61%, and maintain an accuracy of 94.10% (0.14% higher than the original model); with the ResNet architecture on ImageNet, we reduce the parameters by 55% while keeping the accuracy at 76.12% (a drop of only 0.03%).
1 code implementation • 22 Jan 2025 • Yongduo Sui, Jie Sun, Shuyao Wang, Zemin Liu, Qing Cui, Longfei Li, Xiang Wang
It provides a unified perspective on invariant graph learning, emphasizing both structural and semantic invariance principles to identify more robust and stable features.
no code implementations • 16 Nov 2024 • Wei Zhuo, Zemin Liu, Bryan Hooi, Bingsheng He, Guang Tan, Rizal Fathony, Jia Chen
Label imbalance and the homophily-heterophily mixture are two fundamental problems encountered when applying Graph Neural Networks (GNNs) to Graph Fraud Detection (GFD) tasks.
no code implementations • 15 Oct 2024 • Xiang Liu, Yijun Song, Xia Li, Yifei Sun, Huiying Lan, Zemin Liu, Linshan Jiang, Jialin Li
We conduct extensive experiments on five datasets with three model structures, demonstrating that our approach significantly reduces inference latency on edge devices and achieves model size reductions of up to 28.9 times and 34.1 times, respectively, while maintaining test accuracy comparable to the original Vision Transformer.
no code implementations • 20 Feb 2024 • Qian Wang, Zemin Liu, Zhen Zhang, Bingsheng He
Class imbalance in graph-structured data, where minor classes are significantly underrepresented, poses a critical challenge for Graph Neural Networks (GNNs).
no code implementations • 4 Feb 2024 • Qiheng Mao, Zemin Liu, Chenghao Liu, Zhuo Li, Jianling Sun
This collaboration harnesses the sophisticated linguistic capabilities of LLMs to improve the contextual understanding and adaptability of graph models, thereby broadening the scope and potential of GRL.
1 code implementation • 2 Feb 2024 • Xingtong Yu, Yuan Fang, Zemin Liu, Yuxia Wu, Zhihao Wen, Jianyuan Bo, Xinming Zhang, Steven C. H. Hoi
The techniques can be broadly categorized into meta-learning, pre-training, and hybrid approaches, with a finer-grained classification in each category to aid readers in their method selection process.
no code implementations • 4 Dec 2023 • Xingtong Yu, Yuan Fang, Zemin Liu, Xinming Zhang
In this paper, we propose HGPROMPT, a novel pre-training and prompting framework to unify not only pre-training and downstream tasks but also homogeneous and heterogeneous graphs via a dual-template design.
2 code implementations • 26 Nov 2023 • Xingtong Yu, Zhenghao Liu, Yuan Fang, Zemin Liu, Sihong Chen, Xinming Zhang
In this paper, we propose GraphPrompt, a novel pre-training and prompting framework on graphs.
1 code implementation • 23 Oct 2023 • Mouxiang Chen, Zemin Liu, Chenghao Liu, Jundong Li, Qiheng Mao, Jianling Sun
Based on this framework, we propose a prompt-based transferability test to find the most relevant pretext task in order to reduce the semantic gap.
1 code implementation • 2 Oct 2023 • Qian Wang, Zhen Zhang, Zemin Liu, Shengliang Lu, Bingqiao Luo, Bingsheng He
While numerous public blockchain datasets are available, their utility is constrained by an exclusive focus on blockchain data.
1 code implementation • 27 Sep 2023 • Mouxiang Chen, Chenghao Liu, Zemin Liu, Zhuo Li, Jianling Sun
Unbiased Learning to Rank (ULTR) aims to train unbiased ranking models from biased click logs, by explicitly modeling a generation process for user behavior and fitting click data based on the examination hypothesis.
1 code implementation • 26 Aug 2023 • Zemin Liu, Yuan Li, Nan Chen, Qian Wang, Bryan Hooi, Bingsheng He
However, these methods often suffer from data imbalance, a common issue in graph data where certain segments possess abundant data while others are scarce, thereby leading to biased learning outcomes.
1 code implementation • 22 Feb 2023 • Qiheng Mao, Zemin Liu, Chenghao Liu, Jianling Sun
To bridge this gap, in this paper we investigate representation learning on heterogeneous information networks (HINs) with Graph Transformers, and propose a novel model named HINormer, which capitalizes on a larger-range aggregation mechanism for node representation learning.
1 code implementation • 21 Feb 2023 • Trung-Kien Nguyen, Zemin Liu, Yuan Fang
Assuming no type information is given, we define a so-called latent heterogeneous graph (LHG), which carries latent heterogeneous semantics as the node/edge types cannot be observed.
2 code implementations • 16 Feb 2023 • Zemin Liu, Xingtong Yu, Yuan Fang, Xinming Zhang
In particular, prompting is a popular alternative to fine-tuning in natural language processing, which is designed to narrow the gap between pre-training and downstream objectives in a task-specific manner.
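As a rough illustration of how prompting can replace fine-tuning, the sketch below learns only a small task-specific prompt vector on top of a frozen pre-trained graph encoder; it conveys the general idea of graph prompting, not the paper's specific design.

```python
# Generic illustration of prompt tuning on a frozen graph encoder: only a
# small prompt vector is trained per downstream task, while the pre-trained
# GNN weights stay fixed. Simplified sketch, not the exact GraphPrompt model.
import torch

class PromptedReadout(torch.nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        # Task-specific learnable prompt; the encoder producing the node
        # embeddings is assumed to be frozen.
        self.prompt = torch.nn.Parameter(torch.ones(hidden_dim))

    def forward(self, node_embeddings: torch.Tensor) -> torch.Tensor:
        # Element-wise reweighting of frozen embeddings before pooling.
        return (node_embeddings * self.prompt).mean(dim=0)
```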
1 code implementation • 8 Feb 2023 • Zemin Liu, Trung-Kien Nguyen, Yuan Fang
In particular, the varying neighborhood structures across nodes, manifesting themselves in drastically different node degrees, give rise to the diverse behaviors of nodes and biased outcomes.
1 code implementation • 7 Feb 2023 • Xingtong Yu, Zemin Liu, Yuan Fang, Xinming Zhang
However, typical GNNs employ a node-centric message passing scheme that receives and aggregates messages on nodes, which is inadequate for the complex structure matching required by isomorphism counting.
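For context, a node-centric message passing step of the kind referred to above looks roughly as follows; the mean aggregation and ReLU update here are generic choices, not the paper's counting model.

```python
# One round of node-centric message passing: each node averages its
# neighbors' features and applies a simple nonlinearity.
import numpy as np

def message_passing_step(features: np.ndarray, adjacency: np.ndarray) -> np.ndarray:
    """Aggregate neighbor features onto each node (mean aggregation)."""
    degree = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    neighbor_mean = adjacency @ features / degree
    return np.maximum(0.0, neighbor_mean)  # ReLU update
```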
no code implementations • 4 Aug 2022 • Fanwei Zhu, Wendong Xiao, Yao Yu, Ziyi Wang, Zulong Chen, Quan Lu, Zemin Liu, Minghui Wu, Shenghua Ni
Demand estimation plays an important role in dynamic pricing where the optimal price can be obtained via maximizing the revenue based on the demand curve.
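The objective in that sentence can be made concrete with a toy example: given an estimated demand curve d(p), choose the price maximizing revenue p * d(p). The linear demand curve below is purely hypothetical.

```python
# Toy revenue maximization over an estimated demand curve.
def optimal_price(demand, prices):
    return max(prices, key=lambda p: p * demand(p))

demand_curve = lambda p: max(0.0, 100.0 - 2.0 * p)   # hypothetical estimate
candidate_prices = [p / 10 for p in range(0, 501)]    # prices 0.0 .. 50.0
best = optimal_price(demand_curve, candidate_prices)  # -> 25.0 for this curve
```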
1 code implementation • 3 Jun 2022 • Mouxiang Chen, Chenghao Liu, Zemin Liu, Jianling Sun
Most of the current ULTR methods are based on the examination hypothesis (EH), which assumes that the click probability can be factorized into two scalar functions, one related to ranking features and the other related to bias factors.
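The factorization that the examination hypothesis asserts can be written out directly; the function below simply encodes P(click) = P(examined | bias factors) * P(relevant | ranking features), with illustrative names.

```python
# Examination hypothesis: click probability factorizes into an examination
# term (driven by bias factors such as position) and a relevance term
# (driven by ranking features).
def click_probability(examination_prob: float, relevance_prob: float) -> float:
    return examination_prob * relevance_prob
```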
1 code implementation • 27 Oct 2021 • Zemin Liu, Yuan Fang, Chenghao Liu, Steven C. H. Hoi
Ideally, how a node receives its neighborhood information should be a function of its local context, to diverge from the global GNN model shared by all nodes.
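One simple way to picture "aggregation as a function of local context" is a node-specific gate applied to the shared aggregation, as sketched below; this illustrates the idea only and is not the paper's exact architecture.

```python
# Locally conditioned aggregation: a gate computed from the node's own state
# scales how much neighborhood information the node absorbs.
import torch

class LocalizedAggregation(torch.nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = torch.nn.Linear(dim, dim)

    def forward(self, self_feat: torch.Tensor, neighbor_mean: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(self_feat))  # node-specific gate
        return self_feat + g * neighbor_mean     # locally modulated update
```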
no code implementations • 29 Sep 2021 • Xingtong Yu, Zemin Liu, Yuan Fang, Xinming Zhang
At the graph level, we modulate the graph representation conditioned on the query subgraph, so that the model can be adapted to each unique query for better matching with the input graph.
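A FiLM-style scale-and-shift conditioned on the query embedding is one concrete way to realize the modulation described above; the layer names and shapes here are assumptions for illustration, not the paper's design.

```python
# Query-conditioned modulation of a graph representation: the query embedding
# produces a scale and a shift applied to the graph embedding.
import torch

class QueryModulation(torch.nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.to_scale = torch.nn.Linear(dim, dim)
        self.to_shift = torch.nn.Linear(dim, dim)

    def forward(self, graph_repr: torch.Tensor, query_repr: torch.Tensor) -> torch.Tensor:
        return graph_repr * self.to_scale(query_repr) + self.to_shift(query_repr)
```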
no code implementations • 14 May 2021 • Zhihao Wen, Yuan Fang, Zemin Liu
That is, MI-GNN does not directly learn an inductive model; it learns the general knowledge of how to train a model for semi-supervised node classification on new graphs.
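"Learning how to train a model" is the meta-learning pattern of an inner adaptation step nested inside an outer update of the shared initialization. Below is a tiny runnable caricature on synthetic 1-D quadratic tasks (MAML-style), not MI-GNN itself.

```python
# Toy meta-learning loop: the outer loop learns an initialization that adapts
# well after a single inner gradient step on each synthetic task.
def grad(theta, target):                 # d/dtheta of (theta - target)^2
    return 2.0 * (theta - target)

def meta_train(tasks, inner_lr=0.1, outer_lr=0.05, steps=200):
    theta = 0.0                          # shared initialization being learned
    for _ in range(steps):
        meta_grad = 0.0
        for target in tasks:
            adapted = theta - inner_lr * grad(theta, target)          # inner step
            meta_grad += (1 - 2 * inner_lr) * grad(adapted, target)   # outer gradient
        theta -= outer_lr * meta_grad / len(tasks)
    return theta

init = meta_train(tasks=[-2.0, 1.0, 4.0])  # converges near the task mean
```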
1 code implementation • 28 Nov 2017 • Jia Wang, Vincent W. Zheng, Zemin Liu, Kevin Chen-Chuan Chang
As a result, we introduce a new data model, namely diffusion topologies, to fully describe the cascade structure.
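As a loose sketch of what describing the cascade structure (rather than a flat user sequence) might look like as a data structure, consider the container below; the field names are assumptions, not the paper's diffusion-topology schema.

```python
# One cascade represented as a small graph over its participants: who was
# activated when, and which activations plausibly influenced which.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Cascade:
    item_id: str
    activations: List[Tuple[str, float]] = field(default_factory=list)  # (user, time)
    edges: List[Tuple[str, str]] = field(default_factory=list)          # influencer -> influenced
```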