Search Results for author: Jiancan Wu

Found 18 papers, 13 papers with code

Self-supervised Graph Learning for Recommendation

2 code implementations • 21 Oct 2020 • Jiancan Wu, Xiang Wang, Fuli Feng, Xiangnan He, Liang Chen, Jianxun Lian, Xing Xie

In this work, we explore self-supervised learning on the user-item graph to improve the accuracy and robustness of GCNs for recommendation.

Graph Learning Representation Learning +1

Query and Response Augmentation Cannot Help Out-of-domain Math Reasoning Generalization

1 code implementation • 9 Oct 2023 • Chengpeng Li, Zheng Yuan, Hongyi Yuan, Guanting Dong, Keming Lu, Jiancan Wu, Chuanqi Tan, Xiang Wang, Chang Zhou

In this paper, we investigate such data augmentation for math reasoning and aim to answer: (1) What strategies of data augmentation are more effective; (2) What is the scaling relationship between the amount of augmented data and model performance; and (3) Can data augmentation incentivize generalization to out-of-domain mathematical reasoning tasks?

Ranked #53 on Arithmetic Reasoning on GSM8K (using extra training data)

Arithmetic Reasoning Data Augmentation +3

Large Language Model Can Interpret Latent Space of Sequential Recommender

2 code implementations • 31 Oct 2023 • Zhengyi Yang, Jiancan Wu, Yanchen Luo, Jizhi Zhang, Yancheng Yuan, An Zhang, Xiang Wang, Xiangnan He

Sequential recommendation aims to predict the next item of interest for a user based on his/her interaction history with previous items.

Language Modelling Large Language Model +1

Generate What You Prefer: Reshaping Sequential Recommendation via Guided Diffusion

1 code implementation • NeurIPS 2023 • Zhengyi Yang, Jiancan Wu, Zhicai Wang, Xiang Wang, Yancheng Yuan, Xiangnan He

Scrutinizing previous studies, we can summarize a common learning-to-classify paradigm -- given a positive item, a recommender model performs negative sampling to add negative items and learns to classify whether the user prefers them or not, based on his/her historical interaction sequence.
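For context, the learning-to-classify paradigm this abstract contrasts against is commonly instantiated as a BPR-style pairwise objective over sampled negatives. The sketch below is a generic, minimal illustration of that baseline paradigm (all names, shapes, and the sampling scheme are hypothetical); it is not the paper's guided-diffusion method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 1000, 64
item_emb = rng.normal(scale=0.1, size=(n_items, dim))  # hypothetical item embedding table


def bpr_step(user_state, pos_item, n_negatives=4):
    """One learning-to-classify step: score the observed (positive) item against
    sampled negatives and penalize the model when a negative outranks it."""
    neg_items = rng.integers(0, n_items, size=n_negatives)  # negative sampling
    pos_score = user_state @ item_emb[pos_item]
    neg_scores = item_emb[neg_items] @ user_state
    # BPR loss: -log sigmoid(positive score - negative score)
    return -np.mean(np.log(1.0 / (1.0 + np.exp(-(pos_score - neg_scores)))))


# In practice user_state comes from a sequence encoder over the interaction history;
# here it is just a random vector for illustration.
user_state = rng.normal(scale=0.1, size=dim)
print(bpr_step(user_state, pos_item=42))
```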

Denoising Sequential Recommendation

Causal Attention for Interpretable and Generalizable Graph Classification

1 code implementation • 30 Dec 2021 • Yongduo Sui, Xiang Wang, Jiancan Wu, Min Lin, Xiangnan He, Tat-Seng Chua

To endow the classifier with better interpretability and generalization, we propose the Causal Attention Learning (CAL) strategy, which discovers the causal patterns and mitigates the confounding effect of shortcuts.

Graph Attention Graph Classification

Cross Pairwise Ranking for Unbiased Item Recommendation

1 code implementation • 26 Apr 2022 • Qi Wan, Xiangnan He, Xiang Wang, Jiancan Wu, Wei Guo, Ruiming Tang

In this work, we develop a new learning paradigm named Cross Pairwise Ranking (CPR) that achieves unbiased recommendation without knowing the exposure mechanism.

Recommendation Systems

Adap-τ: Adaptively Modulating Embedding Magnitude for Recommendation

2 code implementations • 9 Feb 2023 • Jiawei Chen, Junkang Wu, Jiancan Wu, Sheng Zhou, Xuezhi Cao, Xiangnan He

Recent years have witnessed the great success of embedding-based methods in recommender systems.

Recommendation Systems

LLaRA: Aligning Large Language Models with Sequential Recommenders

1 code implementation • 5 Dec 2023 • Jiayi Liao, Sihang Li, Zhengyi Yang, Jiancan Wu, Yancheng Yuan, Xiang Wang

To harness the complementary strengths of traditional recommenders (which encode user behavioral knowledge) and LLMs (which possess world knowledge about items), we propose LLaRA -- a Large Language and Recommendation Assistant framework.

Language Modelling Sequential Recommendation +1

GIF: A General Graph Unlearning Strategy via Influence Function

1 code implementation • 6 Apr 2023 • Jiancan Wu, Yi Yang, Yuchun Qian, Yongduo Sui, Xiang Wang, Xiangnan He

Then, we identify the crux behind the failure of the traditional influence function for graph unlearning, and devise the Graph Influence Function (GIF), a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an ε-mass perturbation of the deleted data.
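As background, the classical influence-function estimate (which, per the abstract, GIF generalizes to graph unlearning by also accounting for the neighborhood affected by a deletion) approximates the parameter change caused by re-weighting a training point z by a small mass ε as:

```latex
% Classical influence-function estimate for an epsilon-mass perturbation of z
% (standard formulation; GIF's graph-aware correction is described in the paper).
\[
\hat{\theta}_{\epsilon, z} \;\approx\; \hat{\theta}
  \;-\; \epsilon \, H_{\hat{\theta}}^{-1} \, \nabla_{\theta}\,\ell(z, \hat{\theta}),
\qquad
H_{\hat{\theta}} \;=\; \frac{1}{n}\sum_{i=1}^{n} \nabla_{\theta}^{2}\,\ell(z_i, \hat{\theta}).
\]
```

Setting ε = -1/n recovers the usual estimate of the parameters after removing z from an n-sample training set without retraining.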

Machine Unlearning

Graph Convolution Machine for Context-aware Recommender System

1 code implementation • 30 Jan 2020 • Jiancan Wu, Xiangnan He, Xiang Wang, Qifan Wang, Weijian Chen, Jianxun Lian, Xing Xie

The encoder projects users, items, and contexts into embedding vectors, which are passed to the GC layers that refine user and item embeddings with context-aware graph convolutions on the user-item graph.
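The sketch below illustrates the encoder-plus-graph-convolution pattern the abstract describes, with hypothetical shapes, a row-normalized aggregation, and context entering only the final scoring step; it is a simplified stand-in, not the paper's exact GCM formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_contexts, dim = 100, 200, 10, 32

# Encoder: project users, items, and context fields into embedding vectors.
user_emb = rng.normal(scale=0.1, size=(n_users, dim))
item_emb = rng.normal(scale=0.1, size=(n_items, dim))
ctx_emb = rng.normal(scale=0.1, size=(n_contexts, dim))

# Hypothetical binary user-item interaction matrix and degree normalizers.
R = (rng.random((n_users, n_items)) < 0.05).astype(float)
Du = np.maximum(R.sum(axis=1, keepdims=True), 1.0)  # user degrees
Di = np.maximum(R.sum(axis=0, keepdims=True), 1.0)  # item degrees


def gc_layer(u, v):
    """One graph-convolution layer on the user-item graph: each user averages its
    items' embeddings and each item averages its users' embeddings."""
    return (R / Du) @ v, (R / Di).T @ u


# Refine the encoder outputs with two GC layers and sum the layer outputs.
u1, v1 = gc_layer(user_emb, item_emb)
u2, v2 = gc_layer(u1, v1)
user_final = user_emb + u1 + u2
item_final = item_emb + v1 + v2

# Score a (user, item, context) triple; here context only modulates the final score.
u, i, c = user_final[3], item_final[7], ctx_emb[2]
print(u @ i + c @ (u + i))
```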

Collaborative Filtering Recommendation Systems

Unleashing the Power of Graph Data Augmentation on Covariate Distribution Shift

1 code implementation • NeurIPS 2023 • Yongduo Sui, Qitian Wu, Jiancan Wu, Qing Cui, Longfei Li, Jun Zhou, Xiang Wang, Xiangnan He

From the perspective of invariant learning and stable learning, a recently well-established paradigm for out-of-distribution generalization, stable features of the graph are assumed to causally determine labels, while environmental features tend to be unstable and can lead to the two primary types of distribution shifts.

Data Augmentation Graph Classification +2

BSL: Understanding and Improving Softmax Loss for Recommendation

1 code implementation • 20 Dec 2023 • Junkang Wu, Jiawei Chen, Jiancan Wu, Wentao Shi, Jizhi Zhang, Xiang Wang

Loss functions steer the optimization direction of recommendation models and are critical to model performance, but have received relatively little attention in recent recommendation research.

Fairness

How Graph Convolutions Amplify Popularity Bias for Recommendation?

1 code implementation • 24 May 2023 • Jiajia Chen, Jiancan Wu, Jiawei Chen, Xin Xin, Yong Li, Xiangnan He

Through theoretical analyses, we identify two fundamental factors: (1) with graph convolution (i.e., neighborhood aggregation), popular items exert a larger influence than tail items on neighboring users, pulling those users towards popular items in the representation space; (2) after multiple rounds of graph convolution, popular items affect more high-order neighbors and become even more influential.
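For intuition on factor (1), under a standard symmetrically normalized neighborhood aggregation (a LightGCN-style reference formulation, not necessarily the exact one analyzed in the paper), a popular item appears in the neighborhood sum of many users, so its embedding enters far more user updates than a tail item's does:

```latex
% Symmetrically normalized neighborhood aggregation on the user-item graph;
% |N(u)| and |N(i)| denote user and item degrees.
\[
e_u^{(k+1)} = \sum_{i \in N(u)} \frac{e_i^{(k)}}{\sqrt{|N(u)|\,|N(i)|}},
\qquad
e_i^{(k+1)} = \sum_{u \in N(i)} \frac{e_u^{(k)}}{\sqrt{|N(i)|\,|N(u)|}}.
\]
```

Stacking such layers lets a popular item reach ever more high-order neighbors, which is the compounding effect described as factor (2).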

Recommendation Systems

Recommendation Unlearning via Influence Function

no code implementations • 5 Jul 2023 • Yang Zhang, Zhiyu Hu, Yimeng Bai, Fuli Feng, Jiancan Wu, Qifan Wang, Xiangnan He

In this work, we propose an Influence Function-based Recommendation Unlearning (IFRU) framework, which efficiently updates the model without retraining by estimating the influence of the unusable data on the model via the influence function.

Model-enhanced Contrastive Reinforcement Learning for Sequential Recommendation

no code implementations • 25 Oct 2023 • Chengpeng Li, Zhengyi Yang, Jizhi Zhang, Jiancan Wu, Dingxian Wang, Xiangnan He, Xiang Wang

Therefore, the data sparsity issue of reward signals and state transitions is very severe, yet it has long been overlooked by existing RL recommenders. Worse still, RL methods learn through trial and error, but negative feedback cannot be obtained in implicit-feedback recommendation tasks, which aggravates the overestimation problem of offline RL recommenders.

Contrastive Learning Offline RL +3

Dynamic Sparse Learning: A Novel Paradigm for Efficient Recommendation

no code implementations • 5 Feb 2024 • Shuyao Wang, Yongduo Sui, Jiancan Wu, Zhi Zheng, Hui Xiong

In the realm of deep learning-based recommendation systems, the increasing computational demands, driven by the growing number of users and items, pose a significant challenge to practical deployment.

Model Compression Recommendation Systems +1

Leave No Patient Behind: Enhancing Medication Recommendation for Rare Disease Patients

no code implementations • 26 Mar 2024 • Zihao Zhao, Yi Jing, Fuli Feng, Jiancan Wu, Chongming Gao, Xiangnan He

Medication recommendation systems have gained significant attention in healthcare as a means of providing tailored and effective drug combinations based on patients' clinical information.

Fairness Recommendation Systems
