Search Results for author: Yinglong Xia

Found 17 papers, 7 papers with code

TASER: Temporal Adaptive Sampling for Fast and Accurate Dynamic Graph Representation Learning

1 code implementation • 8 Feb 2024 • Gangda Deng, Hongkuan Zhou, Hanqing Zeng, Yinglong Xia, Christopher Leung, Jianbo Li, Rajgopal Kannan, Viktor Prasanna

Recently, Temporal Graph Neural Networks (TGNNs) have demonstrated state-of-the-art performance in various high-impact applications, including fraud detection and content recommendation.

Denoising • Fraud Detection +1

Mixture of Weak & Strong Experts on Graphs

no code implementations • 9 Nov 2023 • Hanqing Zeng, Hanjia Lyu, Diyi Hu, Yinglong Xia, Jiebo Luo

We propose to decouple the two modalities by mixture of weak and strong experts (Mowst), where the weak expert is a light-weight Multi-layer Perceptron (MLP), and the strong expert is an off-the-shelf Graph Neural Network (GNN).

Node Classification
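
A minimal sketch of the weak/strong mixture idea described above, assuming a confidence-based gate on the weak expert's prediction; the gating rule, hidden sizes, and the choice of GCNConv as the strong expert are illustrative assumptions, not the paper's exact Mowst design.

```python
# Sketch of a weak (MLP) / strong (GNN) expert mixture for node classification.
# Gating rule and architecture details are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class WeakStrongMixture(nn.Module):
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        # Weak expert: a light-weight MLP on node features only.
        self.weak = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, num_classes)
        )
        # Strong expert: an off-the-shelf GNN that also uses graph structure.
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, num_classes)

    def forward(self, x, edge_index):
        weak_logits = self.weak(x)
        # Per-node confidence of the weak expert (max softmax probability).
        confidence = weak_logits.softmax(dim=-1).max(dim=-1, keepdim=True).values

        strong_logits = self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

        # Nodes the MLP is confident about lean on the weak expert;
        # the rest fall back to the GNN.
        return confidence * weak_logits + (1.0 - confidence) * strong_logits
```

The intent of such a split is that the cheap MLP can serve nodes whose features alone are decisive, while the GNN is reserved for nodes where neighborhood structure matters.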

Deceptive Fairness Attacks on Graphs via Meta Learning

1 code implementation • 24 Oct 2023 • Jian Kang, Yinglong Xia, Ross Maciejewski, Jiebo Luo, Hanghang Tong

We study deceptive fairness attacks on graphs to answer the following question: How can we achieve poisoning attacks on a graph learning model to exacerbate the bias deceptively?

Adversarial Robustness • Fairness +3

Resprompt: Residual Connection Prompting Advances Multi-Step Reasoning in Large Language Models

no code implementations • 7 Oct 2023 • Song Jiang, Zahra Shakeri, Aaron Chan, Maziar Sanjabi, Hamed Firooz, Yinglong Xia, Bugra Akyildiz, Yizhou Sun, Jinchao Li, Qifan Wang, Asli Celikyilmaz

Breakdown analysis further highlights that RESPROMPT particularly excels in complex multi-step reasoning: for questions demanding at least five reasoning steps, RESPROMPT outperforms the best CoT-based benchmarks by a remarkable average improvement of 21.1% on LLaMA-65B and 14.3% on LLaMA2-70B.

Math

Hierarchical Multi-Marginal Optimal Transport for Network Alignment

no code implementations • 6 Oct 2023 • Zhichen Zeng, Boxin Du, Si Zhang, Yinglong Xia, Zhining Liu, Hanghang Tong

To depict high-order relationships across multiple networks, the FGW distance is generalized to the multi-marginal setting, based on which networks can be aligned jointly.
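
For reference, a hedged sketch of the standard two-marginal fused Gromov-Wasserstein (FGW) objective that this work generalizes; here C^1 and C^2 are intra-network structure matrices, M is the cross-network node-feature cost, and α trades off the two terms (notation assumed, and the paper's multi-marginal/hierarchical form is not reproduced).

```latex
% Two-marginal FGW distance between attributed networks (C^1,\mu) and (C^2,\nu).
% The multi-marginal generalization optimizes a joint coupling over all
% networks' marginals simultaneously.
\mathrm{FGW}_{q,\alpha}(\mu,\nu)
  = \min_{\pi \in \Pi(\mu,\nu)}
    \sum_{i,j,k,l}
      \Big[(1-\alpha)\, M_{ij}^{\,q}
           + \alpha\, \big|C^1_{ik} - C^2_{jl}\big|^{q}\Big]\,
      \pi_{ij}\,\pi_{kl}
```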

User-Controllable Recommendation via Counterfactual Retrospective and Prospective Explanations

1 code implementation • 2 Aug 2023 • Juntao Tan, Yingqiang Ge, Yan Zhu, Yinglong Xia, Jiebo Luo, Jianchao Ji, Yongfeng Zhang

Acknowledging the recent advancements in explainable recommender systems that enhance users' understanding of recommendation mechanisms, we propose leveraging these advancements to improve user controllability.

counterfactual • Counterfactual Reasoning +1

LLM-Rec: Personalized Recommendation via Prompting Large Language Models

no code implementations • 24 Jul 2023 • Hanjia Lyu, Song Jiang, Hanqing Zeng, Yinglong Xia, Qifan Wang, Si Zhang, Ren Chen, Christopher Leung, Jiajie Tang, Jiebo Luo

Notably, the success of LLM-Rec lies in its prompting strategies, which effectively tap into the language model's comprehension of both general and specific item characteristics.
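
A hypothetical sketch of what one prompting step for item-description augmentation could look like; the template wording and the `complete` helper are assumptions for illustration, not LLM-Rec's actual prompts or pipeline.

```python
# Hypothetical prompt-construction step for LLM-based item augmentation,
# in the spirit of prompting for general and specific item characteristics.
def build_recommendation_prompt(item_title: str, item_description: str) -> str:
    return (
        "You are a recommendation assistant.\n"
        f"Item: {item_title}\n"
        f"Description: {item_description}\n"
        "Paraphrase the description and list both general characteristics "
        "(genre, tone, audience) and specific characteristics of this item "
        "that would matter for recommending it to a user."
    )


def augment_item(item_title: str, item_description: str, complete) -> str:
    # `complete` is any text-completion callable (e.g. a wrapper around an
    # LLM API); the enriched text can then feed a downstream recommender.
    return complete(build_recommendation_prompt(item_title, item_description))
```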

Explainable Fairness in Recommendation

no code implementations • 24 Apr 2022 • Yingqiang Ge, Juntao Tan, Yan Zhu, Yinglong Xia, Jiebo Luo, Shuchang Liu, Zuohui Fu, Shijie Geng, Zelong Li, Yongfeng Zhang

In this paper, we study the problem of explainable fairness, which helps to gain insights about why a system is fair or unfair, and guides the design of fair recommender systems with a more informed and unified methodology.

counterfactual • Fairness +1

RawlsGCN: Towards Rawlsian Difference Principle on Graph Convolutional Network

no code implementations • 28 Feb 2022 • Jian Kang, Yan Zhu, Yinglong Xia, Jiebo Luo, Hanghang Tong

Graph Convolutional Network (GCN) plays pivotal roles in many real-world applications.

Decoupling the Depth and Scope of Graph Neural Networks

1 code implementation • NeurIPS 2021 • Hanqing Zeng, Muhan Zhang, Yinglong Xia, Ajitesh Srivastava, Andrey Malevich, Rajgopal Kannan, Viktor Prasanna, Long Jin, Ren Chen

We propose a design principle to decouple the depth and scope of GNNs -- to generate representation of a target entity (i.e., a node or an edge), we first extract a localized subgraph as the bounded-size scope, and then apply a GNN of arbitrary depth on top of the subgraph.

Link Prediction • Node Classification +1
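
A minimal sketch of this decoupling under stated assumptions: the scope is a fixed 2-hop subgraph extracted with torch_geometric's k_hop_subgraph, while the GNN depth (here 8 GCN layers) is chosen independently of that radius; layer type, depth, and sizes are illustrative, not the paper's exact architecture.

```python
# Bounded scope, arbitrary depth: run a deep GNN only on a small local subgraph.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.utils import k_hop_subgraph


class DeepGNNOnShallowScope(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim, depth=8):
        super().__init__()
        dims = [in_dim] + [hid_dim] * (depth - 1) + [out_dim]
        self.convs = torch.nn.ModuleList(
            GCNConv(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:])
        )

    def forward(self, x, edge_index):
        for conv in self.convs[:-1]:
            x = F.relu(conv(x, edge_index))
        return self.convs[-1](x, edge_index)


def embed_target(model, target, x, edge_index, num_hops=2):
    # Scope: a 2-hop neighborhood, independent of the GNN's depth.
    subset, sub_edge_index, mapping, _ = k_hop_subgraph(
        target, num_hops, edge_index, relabel_nodes=True
    )
    out = model(x[subset], sub_edge_index)
    return out[mapping]  # representation of the target node
```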

Deep Graph Neural Networks with Shallow Subgraph Samplers

2 code implementations • 2 Dec 2020 • Hanqing Zeng, Muhan Zhang, Yinglong Xia, Ajitesh Srivastava, Andrey Malevich, Rajgopal Kannan, Viktor Prasanna, Long Jin, Ren Chen

We propose a simple "deep GNN, shallow sampler" design principle to improve both the GNN accuracy and efficiency -- to generate representation of a target node, we use a deep GNN to pass messages only within a shallow, localized subgraph.

Labeling Trick: A Theory of Using Graph Neural Networks for Multi-Node Representation Learning

2 code implementations • NeurIPS 2021 • Muhan Zhang, Pan Li, Yinglong Xia, Kai Wang, Long Jin

In this paper, we provide a theory of using graph neural networks (GNNs) for multi-node representation learning (where we are interested in learning a representation for a set of more than one node, such as a link).

General Classification • Graph Classification +4
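
A minimal sketch of one simple labeling-trick variant for the link case: the target node set is marked with an extra indicator feature before any GNN is applied, so the resulting embeddings depend on which set is being represented. The concatenation scheme here is an assumption for illustration, not the paper's full construction or theory.

```python
# Mark the target node set so a GNN can distinguish it from the rest of the graph.
import torch


def add_target_labels(x: torch.Tensor, target_nodes: torch.Tensor) -> torch.Tensor:
    """x: [num_nodes, feat_dim] node features; target_nodes: indices of the
    node set of interest (e.g. the two endpoints of a candidate link)."""
    label = torch.zeros(x.size(0), 1, device=x.device)
    label[target_nodes] = 1.0  # indicator for the nodes we jointly represent
    return torch.cat([x, label], dim=-1)  # labeled features fed to any GNN
```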

From Node Embedding to Graph Embedding: Scalable Global Graph Kernel via Random Features

no code implementations • NIPS 2018 • Lingfei Wu, Ian En-Hsu Yen, Kun Xu, Liang Zhao, Yinglong Xia, Michael Witbrock

Graph kernels are one of the most important methods for graph data analysis and have been successfully applied in diverse applications.

Graph Embedding
