Search Results for author: Jiancan Wu

Found 31 papers, 22 papers with code

RePO: ReLU-based Preference Optimization

1 code implementation · 10 Mar 2025 · Junkang Wu, Kexin Huang, Xue Wang, Jinyang Gao, Bolin Ding, Jiancan Wu, Xiangnan He, Xiang Wang

Aligning large language models (LLMs) with human preferences is critical for real-world deployment, yet existing methods like RLHF face computational and stability challenges.

Addressing Delayed Feedback in Conversion Rate Prediction via Influence Functions

no code implementations · 1 Feb 2025 · Chenlu Ding, Jiancan Wu, Yancheng Yuan, Junfeng Fang, Cunchun Li, Xiang Wang, Xiangnan He

In the realm of online digital advertising, conversion rate (CVR) prediction plays a pivotal role in maximizing revenue under cost-per-conversion (CPA) models, where advertisers are charged only when users complete specific actions, such as making a purchase.

Computational Efficiency

Position-aware Graph Transformer for Recommendation

no code implementations · 25 Dec 2024 · Jiajia Chen, Jiancan Wu, Jiawei Chen, Chongming Gao, Yong Li, Xiang Wang

Collaborative recommendation fundamentally involves learning high-quality user and item representations from interaction data.

Collaborative Filtering, Position

RosePO: Aligning LLM-based Recommenders with Human Values

no code implementations · 16 Oct 2024 · Jiayi Liao, Xiangnan He, Ruobing Xie, Jiancan Wu, Yancheng Yuan, Xingwu Sun, Zhanhui Kang, Xiang Wang

Recently, there has been a growing interest in leveraging Large Language Models (LLMs) for recommendation systems, which usually adapt a pre-trained LLM to the recommendation scenario through supervised fine-tuning (SFT).

Hallucination, Recommendation Systems

α-DPO: Adaptive Reward Margin is What Direct Preference Optimization Needs

1 code implementation · 14 Oct 2024 · Junkang Wu, Xue Wang, Zhengyi Yang, Jiancan Wu, Jinyang Gao, Bolin Ding, Xiang Wang, Xiangnan He

Aligning large language models (LLMs) with human values and intentions is crucial for their utility, honesty, and safety.

Computational Efficiency

Text-guided Diffusion Model for 3D Molecule Generation

no code implementations · 4 Oct 2024 · Yanchen Luo, Junfeng Fang, Sihang Li, Zhiyuan Liu, Jiancan Wu, An Zhang, Wenjie Du, Xiang Wang

The de novo generation of molecules with targeted properties is crucial in biology, chemistry, and drug discovery.

3D Molecule Generation, Diversity, +2

Customizing Language Models with Instance-wise LoRA for Sequential Recommendation

1 code implementation · 19 Aug 2024 · Xiaoyu Kong, Jiancan Wu, An Zhang, Leheng Sheng, Hui Lin, Xiang Wang, Xiangnan He

Sequential recommendation systems predict the next interaction item based on users' past interactions, aligning recommendations with individual preferences.

Multi-Task Learning, parameter-efficient fine-tuning, +2

Invariant Graph Learning Meets Information Bottleneck for Out-of-Distribution Generalization

1 code implementation · 3 Aug 2024 · Wenyu Mao, Jiancan Wu, Haoyang Liu, Yongduo Sui, Xiang Wang

In this work, we propose a novel framework, called Invariant Graph Learning based on Information bottleneck theory (InfoIGL), to extract the invariant features of graphs and enhance models' generalization ability to unseen distributions.

Contrastive Learning, Data Augmentation, +3

Adaptive Self-supervised Robust Clustering for Unstructured Data with Unknown Cluster Number

no code implementations · 29 Jul 2024 · Chen-Lu Ding, Jiancan Wu, Wei Lin, Shiyang Shen, Xiang Wang, Yancheng Yuan

ASRC obtains the final clustering results by applying RCC to the learned feature representations with their consistent graph structure and edge weights.

Clustering, Contrastive Learning, +1

Reinforced Prompt Personalization for Recommendation with Large Language Models

1 code implementation · 24 Jul 2024 · Wenyu Mao, Jiancan Wu, Weijian Chen, Chongming Gao, Xiang Wang, Xiangnan He

In this work, we introduce the concept of instance-wise prompting, aiming at personalizing discrete prompts for individual users.

Multi-agent Reinforcement Learning

β-DPO: Direct Preference Optimization with Dynamic β

1 code implementation · 11 Jul 2024 · Junkang Wu, Yuexiang Xie, Zhengyi Yang, Jiancan Wu, Jinyang Gao, Bolin Ding, Xiang Wang, Xiangnan He

Direct Preference Optimization (DPO) has emerged as a compelling approach for training Large Language Models (LLMs) to adhere to human preferences.

Informativeness
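For context, the standard pairwise DPO objective that this line of work builds on can be sketched in a few lines. The function below is an illustrative reimplementation of the usual fixed-β loss, not the paper's dynamic-β variant; the argument names are our own.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for one preference pair.

    logp_* are summed token log-probabilities of the chosen/rejected
    responses under the policy; ref_logp_* are the same quantities
    under the frozen reference model.
    """
    # Implicit reward margin: beta times the difference of log-ratios
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Negative log-sigmoid of the margin
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

A zero margin gives the maximal-uncertainty loss of log 2; widening the margin on the chosen response drives the loss toward zero, which is the behavior β-DPO modulates by adapting β across batches.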

Towards Robust Alignment of Language Models: Distributionally Robustifying Direct Preference Optimization

1 code implementation · 10 Jul 2024 · Junkang Wu, Yuexiang Xie, Zhengyi Yang, Jiancan Wu, Jiawei Chen, Jinyang Gao, Bolin Ding, Xiang Wang, Xiangnan He

We categorize noise into pointwise noise, which includes low-quality data points, and pairwise noise, which encompasses erroneous data pair associations that affect preference rankings.

Let Me Do It For You: Towards LLM Empowered Recommendation via Tool Learning

no code implementations · 24 May 2024 · Yuyue Zhao, Jiancan Wu, Xiang Wang, Wei Tang, Dingxian Wang, Maarten de Rijke

Through the integration of LLMs, ToolRec enables conventional recommender systems to become external tools with a natural language interface.

Attribute, Recommendation Systems

Leave No Patient Behind: Enhancing Medication Recommendation for Rare Disease Patients

1 code implementation · 26 Mar 2024 · Zihao Zhao, Yi Jing, Fuli Feng, Jiancan Wu, Chongming Gao, Xiangnan He

Medication recommendation systems have gained significant attention in healthcare as a means of providing tailored and effective drug combinations based on patients' clinical information.

Fairness, Recommendation Systems

Dynamic Sparse Learning: A Novel Paradigm for Efficient Recommendation

no code implementations · 5 Feb 2024 · Shuyao Wang, Yongduo Sui, Jiancan Wu, Zhi Zheng, Hui Xiong

In the realm of deep learning-based recommendation systems, the increasing computational demands, driven by the growing number of users and items, pose a significant challenge to practical deployment.

Model Compression, Recommendation Systems, +1

BSL: Understanding and Improving Softmax Loss for Recommendation

1 code implementation · 20 Dec 2023 · Junkang Wu, Jiawei Chen, Jiancan Wu, Wentao Shi, Jizhi Zhang, Xiang Wang

Loss functions steer the optimization direction of recommendation models and are critical to model performance, but have received relatively little attention in recent recommendation research.

Fairness
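A minimal sketch of the sampled softmax loss that BSL analyzes and improves may be useful here; the temperature `tau` and the scoring convention below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def softmax_loss(pos_score, neg_scores, tau=0.1):
    """Sampled softmax loss for recommendation: push the positive
    item's score above the sampled negatives', sharpened by the
    temperature tau."""
    logits = [pos_score / tau] + [s / tau for s in neg_scores]
    m = max(logits)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    # Negative log-probability assigned to the positive item
    return -(pos_score / tau - log_z)
```

Because the partition includes the negatives' mass, the loss is always positive and shrinks as the positive score separates from the negatives.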

LLaRA: Large Language-Recommendation Assistant

1 code implementation · 5 Dec 2023 · Jiayi Liao, Sihang Li, Zhengyi Yang, Jiancan Wu, Yancheng Yuan, Xiang Wang, Xiangnan He

Treating the "sequential behaviors of users" as a distinct modality beyond texts, we employ a projector to align the traditional recommender's ID embeddings with the LLM's input space.

Language Modeling, Language Modelling, +2

Large Language Model Can Interpret Latent Space of Sequential Recommender

2 code implementations · 31 Oct 2023 · Zhengyi Yang, Jiancan Wu, Yanchen Luo, Jizhi Zhang, Yancheng Yuan, An Zhang, Xiang Wang, Xiangnan He

Sequential recommendation aims to predict the next item of interest for a user based on his/her interaction history with previous items.

Language Modeling, Language Modelling, +2

Generate What You Prefer: Reshaping Sequential Recommendation via Guided Diffusion

1 code implementation · NeurIPS 2023 · Zhengyi Yang, Jiancan Wu, Zhicai Wang, Xiang Wang, Yancheng Yuan, Xiangnan He

Scrutinizing previous studies, we summarize a common learning-to-classify paradigm: given a positive item, a recommender model performs negative sampling to add negative items and learns to classify whether the user prefers them or not, based on his/her historical interaction sequence.

Denoising, Sequential Recommendation

Model-enhanced Contrastive Reinforcement Learning for Sequential Recommendation

no code implementations · 25 Oct 2023 · Chengpeng Li, Zhengyi Yang, Jizhi Zhang, Jiancan Wu, Dingxian Wang, Xiangnan He, Xiang Wang

Therefore, the data sparsity issue of reward signals and state transitions is very severe, yet it has long been overlooked by existing RL recommenders. Worse still, RL methods learn through trial and error, but negative feedback cannot be obtained in implicit-feedback recommendation tasks, which aggravates the overestimation problem of offline RL recommenders.

Contrastive Learning, model, +5

MuggleMath: Assessing the Impact of Query and Response Augmentation on Math Reasoning

1 code implementation · 9 Oct 2023 · Chengpeng Li, Zheng Yuan, Hongyi Yuan, Guanting Dong, Keming Lu, Jiancan Wu, Chuanqi Tan, Xiang Wang, Chang Zhou

In this paper, we conduct an investigation for such data augmentation in math reasoning and are intended to answer: (1) What strategies of data augmentation are more effective; (2) What is the scaling relationship between the amount of augmented data and model performance; and (3) Can data augmentation incentivize generalization to out-of-domain mathematical reasoning tasks?

Ranked #59 on Arithmetic Reasoning on GSM8K (using extra training data)

Arithmetic Reasoning, Data Augmentation, +3

Recommendation Unlearning via Influence Function

1 code implementation · 5 Jul 2023 · Yang Zhang, Zhiyu Hu, Yimeng Bai, Jiancan Wu, Qifan Wang, Fuli Feng

Given that recent recommender models use historical data to construct both the optimization loss and the computational graph (e.g., neighborhood aggregation), IFRU jointly estimates the direct influence of unusable data on the optimization loss and its spillover influence on the computational graph to pursue complete unlearning.

How Graph Convolutions Amplify Popularity Bias for Recommendation?

1 code implementation · 24 May 2023 · Jiajia Chen, Jiancan Wu, Jiawei Chen, Xin Xin, Yong Li, Xiangnan He

Through theoretical analyses, we identify two fundamental factors: (1) with graph convolution (i.e., neighborhood aggregation), popular items exert larger influence than tail items on neighbor users, making the users move towards popular items in the representation space; (2) after multiple rounds of graph convolution, popular items affect more high-order neighbors and become more influential.

Recommendation Systems
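The neighborhood aggregation referred to in factor (1) can be sketched as a degree-normalized sum of neighbor item embeddings, in the style of LightGCN; the symmetric normalization below is an assumption for illustration, not necessarily the paper's exact setting.

```python
import math

def user_update(user, adj, item_emb, item_deg):
    """One graph-convolution step for a user: a degree-normalized sum
    of neighboring item embeddings.  Popular (high-degree) items appear
    in many users' sums, which is the amplification mechanism the paper
    analyzes."""
    d_u = len(adj[user])
    dim = len(next(iter(item_emb.values())))
    out = [0.0] * dim
    for item in adj[user]:
        # Symmetric normalization 1 / sqrt(d_u * d_i)
        w = 1.0 / math.sqrt(d_u * item_deg[item])
        for k in range(dim):
            out[k] += w * item_emb[item][k]
    return out
```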

GIF: A General Graph Unlearning Strategy via Influence Function

1 code implementation · 6 Apr 2023 · Jiancan Wu, Yi Yang, Yuchun Qian, Yongduo Sui, Xiang Wang, Xiangnan He

Then, we identify why the traditional influence function fails for graph unlearning, and devise the Graph Influence Function (GIF), a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an ε-mass perturbation of the deleted data.

Machine Unlearning
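The classic (non-graph) influence-function estimate that GIF generalizes can be sketched as delta = H⁻¹g, where g sums the loss gradients of the removed points and H is the Hessian at the trained parameters. The diagonal-Hessian approximation below is purely to keep the sketch dependency-free; GIF's actual contribution, accounting for the perturbation spilling into graph neighborhoods, is not reproduced here.

```python
def influence_update(hess_diag, removed_grads):
    """Estimate the parameter shift from removing training points:
    delta = H^{-1} g, with g the summed gradients of the removed
    points' losses.  H is approximated by its diagonal (hess_diag)
    so the sketch needs no linear-algebra library."""
    dim = len(hess_diag)
    g = [sum(grad[k] for grad in removed_grads) for k in range(dim)]
    return [g[k] / hess_diag[k] for k in range(dim)]
```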

Adap-τ: Adaptively Modulating Embedding Magnitude for Recommendation

2 code implementations · 9 Feb 2023 · Jiawei Chen, Junkang Wu, Jiancan Wu, Sheng Zhou, Xuezhi Cao, Xiangnan He

Recent years have witnessed the great successes of embedding-based methods in recommender systems.

Recommendation Systems

Unleashing the Power of Graph Data Augmentation on Covariate Distribution Shift

1 code implementation · NeurIPS 2023 · Yongduo Sui, Qitian Wu, Jiancan Wu, Qing Cui, Longfei Li, Jun Zhou, Xiang Wang, Xiangnan He

From the perspective of invariant learning and stable learning, a recently well-established paradigm for out-of-distribution generalization, stable features of the graph are assumed to causally determine labels, while environmental features tend to be unstable and can lead to the two primary types of distribution shifts.

Data Augmentation, Graph Classification, +2

Cross Pairwise Ranking for Unbiased Item Recommendation

1 code implementation · 26 Apr 2022 · Qi Wan, Xiangnan He, Xiang Wang, Jiancan Wu, Wei Guo, Ruiming Tang

In this work, we develop a new learning paradigm named Cross Pairwise Ranking (CPR) that achieves unbiased recommendation without knowing the exposure mechanism.

Recommendation Systems

Causal Attention for Interpretable and Generalizable Graph Classification

1 code implementation · 30 Dec 2021 · Yongduo Sui, Xiang Wang, Jiancan Wu, Min Lin, Xiangnan He, Tat-Seng Chua

To endow the classifier with better interpretation and generalization, we propose the Causal Attention Learning (CAL) strategy, which discovers the causal patterns and mitigates the confounding effect of shortcuts.

Graph Attention, Graph Classification, +1

Self-supervised Graph Learning for Recommendation

3 code implementations · 21 Oct 2020 · Jiancan Wu, Xiang Wang, Fuli Feng, Xiangnan He, Liang Chen, Jianxun Lian, Xing Xie

In this work, we explore self-supervised learning on the user-item graph to improve the accuracy and robustness of GCNs for recommendation.

Collaborative Filtering, Graph Learning, +2

Graph Convolution Machine for Context-aware Recommender System

1 code implementation · 30 Jan 2020 · Jiancan Wu, Xiangnan He, Xiang Wang, Qifan Wang, Weijian Chen, Jianxun Lian, Xing Xie

The encoder projects users, items, and contexts into embedding vectors, which are passed to the GC layers that refine user and item embeddings with context-aware graph convolutions on the user-item graph.

Collaborative Filtering, Decoder, +1
