Search Results for author: Hanqing Zeng

Found 12 papers, 8 papers with code

TASER: Temporal Adaptive Sampling for Fast and Accurate Dynamic Graph Representation Learning

1 code implementation • 8 Feb 2024 • Gangda Deng, Hongkuan Zhou, Hanqing Zeng, Yinglong Xia, Christopher Leung, Jianbo Li, Rajgopal Kannan, Viktor Prasanna

Recently, Temporal Graph Neural Networks (TGNNs) have demonstrated state-of-the-art performance in various high-impact applications, including fraud detection and content recommendation.

Denoising • Fraud Detection +1

Mixture of Weak & Strong Experts on Graphs

no code implementations • 9 Nov 2023 • Hanqing Zeng, Hanjia Lyu, Diyi Hu, Yinglong Xia, Jiebo Luo

We propose to decouple the two modalities by mixture of weak and strong experts (Mowst), where the weak expert is a light-weight Multi-layer Perceptron (MLP), and the strong expert is an off-the-shelf Graph Neural Network (GNN).

Node Classification
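
For the Mowst entry above, a minimal sketch of the mixture idea, assuming a simple entropy-based confidence gate; the module names and the gating rule are illustrative, not the paper's exact design or released code.

```python
# Illustrative sketch: a cheap MLP "weak expert" handles easy nodes, and a GNN
# "strong expert" is consulted when the MLP is unsure. The entropy gate below is
# an assumption for illustration, not the exact Mowst mechanism.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeakMLP(nn.Module):
    def __init__(self, d_in, d_hid, n_cls):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU(), nn.Linear(d_hid, n_cls))
    def forward(self, x):                      # uses node features only
        return self.net(x)

class StrongGNN(nn.Module):
    def __init__(self, d_in, n_cls):
        super().__init__()
        self.lin = nn.Linear(d_in, n_cls)
    def forward(self, x, adj):                 # one mean-aggregation layer for brevity
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        return self.lin(adj @ x / deg)

def mowst_forward(x, adj, weak, strong):
    p_weak = F.softmax(weak(x), dim=-1)
    # Gate on the weak expert's confidence: low entropy -> trust the MLP,
    # high entropy -> lean on the GNN.
    entropy = -(p_weak * p_weak.clamp_min(1e-9).log()).sum(-1, keepdim=True)
    gate = torch.sigmoid(entropy - entropy.mean())
    p_strong = F.softmax(strong(x, adj), dim=-1)
    return (1 - gate) * p_weak + gate * p_strong

x = torch.randn(6, 8)                          # 6 nodes, 8 features
adj = (torch.rand(6, 6) > 0.6).float()
out = mowst_forward(x, adj, WeakMLP(8, 16, 3), StrongGNN(8, 3))
print(out.shape)                               # torch.Size([6, 3])
```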

On the Equivalence of Graph Convolution and Mixup

no code implementations • 29 Sep 2023 • Xiaotian Han, Hanqing Zeng, Yu Chen, Shaoliang Nie, Jingzhou Liu, Kanika Narang, Zahra Shakeri, Karthik Abinav Sankararaman, Song Jiang, Madian Khabsa, Qifan Wang, Xia Hu

We establish this equivalence mathematically by demonstrating that graph convolution networks (GCN) and simplified graph convolution (SGC) can be expressed as a form of Mixup.

Data Augmentation
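
For the equivalence claimed above, a small numeric illustration, assuming a row-normalized adjacency: one SGC propagation step forms a convex combination of neighbor features, which can be read as a Mixup-style mixture with weights given by the adjacency row. This is a sketch of the intuition, not the paper's formal construction.

```python
# Illustrative only: SGC-style neighborhood averaging as a Mixup-like convex
# combination of neighbor (feature, label) pairs.
import numpy as np

A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)         # adjacency with self-loops
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 2.0]])                     # node features
Y = np.eye(3)                                  # one-hot "labels" for illustration

A_hat = A / A.sum(axis=1, keepdims=True)       # row-normalized adjacency

X_mix = A_hat @ X                              # one SGC hop: no nonlinearity between hops
Y_mix = A_hat @ Y                              # same weights applied to targets

lam = A_hat[0]                                 # mixing coefficients for node 0, sum to 1
assert np.allclose(X_mix[0], lam @ X)          # node 0's new feature is a convex mixture
print(lam, X_mix[0], Y_mix[0])
```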

LLM-Rec: Personalized Recommendation via Prompting Large Language Models

no code implementations24 Jul 2023 Hanjia Lyu, Song Jiang, Hanqing Zeng, Yinglong Xia, Qifan Wang, Si Zhang, Ren Chen, Christopher Leung, Jiajie Tang, Jiebo Luo

Notably, the success of LLM-Rec lies in its prompting strategies, which effectively tap into the language model's comprehension of both general and specific item characteristics.
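
A rough sketch of the prompting-for-recommendation idea described above. The prompt wording and the `call_llm` helper are hypothetical placeholders, not the paper's actual prompts or pipeline.

```python
# Illustrative sketch: augment a bare item description with LLM-generated text
# before feeding it to a downstream recommendation model.
def build_prompt(item_description: str, style: str = "engagement") -> str:
    if style == "engagement":
        # Ask the LLM to surface item characteristics users might care about.
        return ("Paraphrase the following item description and highlight what kind of "
                f"user would enjoy it:\n{item_description}")
    # A plain paraphrase variant, tapping general language understanding.
    return f"Summarize the following item description in one sentence:\n{item_description}"

def augment_item(item_description: str, call_llm) -> str:
    # Concatenate the LLM output with the original description for the recommender.
    return item_description + "\n" + call_llm(build_prompt(item_description))

# Example with a stubbed LLM call:
print(augment_item("A cozy mystery novel set in a small coastal town.",
                   call_llm=lambda p: "[LLM output would appear here]"))
```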

Decoupling the Depth and Scope of Graph Neural Networks

1 code implementation • NeurIPS 2021 • Hanqing Zeng, Muhan Zhang, Yinglong Xia, Ajitesh Srivastava, Andrey Malevich, Rajgopal Kannan, Viktor Prasanna, Long Jin, Ren Chen

We propose a design principle to decouple the depth and scope of GNNs -- to generate representation of a target entity (i.e., a node or an edge), we first extract a localized subgraph as the bounded-size scope, and then apply a GNN of arbitrary depth on top of the subgraph.

Link Prediction • Node Classification +1
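
A minimal sketch of the decoupling principle described in the entry above: first extract a bounded scope (here a 2-hop ego-subgraph), then run a GNN whose depth is chosen independently of the hop count, since messages never leave the extracted subgraph. Helper names are illustrative, not the authors' released code.

```python
import torch

def k_hop_subgraph(adj, target, k):
    """Return the sorted node list of the k-hop neighborhood around `target` (dense adjacency)."""
    nodes = {target}
    frontier = {target}
    for _ in range(k):
        nxt = set()
        for u in frontier:
            nxt |= set(torch.nonzero(adj[u]).flatten().tolist())
        frontier = nxt - nodes
        nodes |= nxt
    return sorted(nodes)

def deep_gnn_on_scope(adj, x, target, k_hops=2, depth=6):
    idx = k_hop_subgraph(adj, target, k_hops)   # bounded-size scope
    sub_adj = adj[idx][:, idx]
    h = x[idx]
    deg = sub_adj.sum(1, keepdim=True).clamp(min=1)
    for _ in range(depth):                      # depth is independent of k_hops
        h = torch.relu(sub_adj @ h / deg)       # parameter-free mean aggregation for brevity
    return h[idx.index(target)]                 # representation of the target node

adj = (torch.rand(20, 20) > 0.8).float()
adj = ((adj + adj.T) > 0).float()
x = torch.randn(20, 8)
print(deep_gnn_on_scope(adj, x, target=0).shape)   # torch.Size([8])
```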

Accelerating Large Scale Real-Time GNN Inference using Channel Pruning

1 code implementation • 10 May 2021 • Hongkuan Zhou, Ajitesh Srivastava, Hanqing Zeng, Rajgopal Kannan, Viktor Prasanna

In this paper, we propose to accelerate GNN inference by pruning the dimensions in each layer with negligible accuracy loss.

Node Classification • Spam detection
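
A sketch of the general channel-pruning idea named in the entry above, assuming a simple L1-magnitude criterion and a fixed keep ratio; the paper's actual channel-selection method may differ, so treat this as illustration only.

```python
# Illustrative sketch: drop low-importance hidden dimensions (channels) of a GNN
# layer to shrink inference cost, and index the next layer's rows to match.
import torch

def prune_channels(weight, keep_ratio=0.5):
    """weight: (d_in, d_out). Keep the d_out*keep_ratio output channels with largest L1 norm."""
    d_out = weight.shape[1]
    k = max(1, int(d_out * keep_ratio))
    importance = weight.abs().sum(dim=0)           # per-output-channel L1 magnitude
    keep = torch.topk(importance, k).indices.sort().values
    return weight[:, keep], keep                   # pruned weight + surviving channel ids

W1 = torch.randn(64, 128)                          # layer-1 weights
W2 = torch.randn(128, 32)                          # layer-2 weights
W1_p, kept = prune_channels(W1, keep_ratio=0.5)
W2_p = W2[kept]                                    # next layer's input rows must match
x = torch.randn(10, 64)
h = torch.relu(x @ W1_p)                           # (10, 64) -> (10, 64 surviving channels)
out = h @ W2_p                                     # (10, 32), roughly half the layer-1 FLOPs
print(W1_p.shape, out.shape)
```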

Deep Graph Neural Networks with Shallow Subgraph Samplers

2 code implementations • 2 Dec 2020 • Hanqing Zeng, Muhan Zhang, Yinglong Xia, Ajitesh Srivastava, Andrey Malevich, Rajgopal Kannan, Viktor Prasanna, Long Jin, Ren Chen

We propose a simple "deep GNN, shallow sampler" design principle to improve both the GNN accuracy and efficiency -- to generate representation of a target node, we use a deep GNN to pass messages only within a shallow, localized subgraph.

Accurate, Efficient and Scalable Training of Graph Neural Networks

2 code implementations • 5 Oct 2020 • Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, Viktor Prasanna

For feature propagation within subgraphs, we improve cache utilization and reduce DRAM traffic by data partitioning.

Graph Sampling
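
A rough sketch of the cache-oriented idea mentioned in the entry above: when propagating features inside a sampled subgraph, process the feature matrix in column blocks so each block's working set stays small. The block size and layout here are illustrative assumptions, not the paper's partitioning scheme.

```python
import numpy as np

def aggregate_blocked(sub_adj, feats, block_cols=128):
    """Mean-aggregate neighbor features one column block at a time."""
    deg = np.maximum(sub_adj.sum(axis=1, keepdims=True), 1)
    out = np.empty_like(feats)
    for start in range(0, feats.shape[1], block_cols):
        end = min(start + block_cols, feats.shape[1])
        # Each column block is streamed through the same multiply, keeping the
        # reused portion of `feats` small enough to stay cache-resident.
        out[:, start:end] = (sub_adj @ feats[:, start:end]) / deg
    return out

sub_adj = (np.random.rand(256, 256) > 0.95).astype(np.float64)
feats = np.random.rand(256, 512)
print(aggregate_blocked(sub_adj, feats).shape)     # (256, 512)
```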

GraphACT: Accelerating GCN Training on CPU-FPGA Heterogeneous Platforms

1 code implementation • 31 Dec 2019 • Hanqing Zeng, Viktor Prasanna

We first analyze the computation and communication characteristics of various GCN training algorithms, and select a subgraph-based algorithm that is well suited for hardware execution.

Representation Learning

SPEC2: SPECtral SParsE CNN Accelerator on FPGAs

no code implementations • 16 Oct 2019 • Yue Niu, Hanqing Zeng, Ajitesh Srivastava, Kartik Lakhotia, Rajgopal Kannan, Yanzhi Wang, Viktor Prasanna

On the other hand, weight pruning techniques address the redundancy in model parameters by converting dense convolutional kernels into sparse ones.
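
A sketch of the generic weight-pruning idea referenced in the excerpt above: zero out low-magnitude kernel entries to obtain a sparse kernel. This shows plain magnitude pruning only; it is not SPEC2's spectral-domain method or accelerator design.

```python
import numpy as np

def magnitude_prune(kernel, sparsity=0.75):
    """Zero the smallest-magnitude `sparsity` fraction of kernel weights."""
    flat = np.abs(kernel).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return kernel.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(kernel) <= threshold, 0.0, kernel)

kernel = np.random.randn(64, 3, 3, 3)              # (out_ch, in_ch, kH, kW)
sparse_kernel = magnitude_prune(kernel, sparsity=0.75)
density = np.count_nonzero(sparse_kernel) / sparse_kernel.size
print(f"nonzero fraction after pruning: {density:.2f}")   # ~0.25
```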

Accurate, Efficient and Scalable Graph Embedding

2 code implementations • 28 Oct 2018 • Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, Viktor Prasanna

However, a major challenge is to reduce the complexity of layered GCNs and make them parallelizable and scalable on very large graphs -- state-of-the-art techniques are unable to achieve scalability without losing accuracy and efficiency.

Clustering • Graph Embedding +2
