Search Results for author: Shuxian Bi

Found 6 papers, 3 papers with code

Leveraging Bidding Graphs for Advertiser-Aware Relevance Modeling in Sponsored Search

no code implementations · Findings (EMNLP) 2021 · Shuxian Bi, Chaozhuo Li, Xiao Han, Zheng Liu, Xing Xie, Haizhen Huang, Zengxuan Wen

As the fundamental basis of sponsored search, relevance modeling has attracted increasing attention due to its tremendous practical value.

Marketing

Proactive Recommendation in Social Networks: Steering User Interest via Neighbor Influence

no code implementations · 13 Sep 2024 · Hang Pan, Shuxian Bi, Wenjie Wang, Haoxuan Li, Peng Wu, Fuli Feng, Xiangnan He

To answer this question, we resort to causal inference and formalize PRSN (Proactive Recommendation in Social Networks) as: (1) estimating the potential feedback of a user on an item under the network interference caused by the item's exposure to the user's neighbors; and (2) adjusting the exposure of a target item to the target user's neighbors so as to trade off steering performance against the damage to the neighbors' experience. (A toy sketch of this two-step formulation follows this entry.)

Causal Inference
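
To make the two-step formalization above concrete, here is a minimal sketch in NumPy. The feedback model `potential_feedback`, the 0.5 influence weight, the greedy exposure selection, and the per-neighbor `neighbor_costs` are all illustrative assumptions, not the paper's estimator or optimization procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def potential_feedback(user_pref, item_vec, neighbor_exposure_rate):
    """Toy stand-in for the potential-outcome estimator: base affinity plus a
    social-influence term that grows with the share of exposed neighbors."""
    base = float(user_pref @ item_vec)
    return base + 0.5 * neighbor_exposure_rate  # 0.5 influence weight is assumed

def steer_exposures(user_pref, item_vec, neighbor_costs, budget, lam=1.0):
    """Greedy exposure adjustment: expose the item to one more neighbor only if
    the marginal steering gain outweighs lam times that neighbor's cost."""
    n = len(neighbor_costs)
    exposed = []
    for _ in range(budget):
        current = potential_feedback(user_pref, item_vec, len(exposed) / n)
        nxt = potential_feedback(user_pref, item_vec, (len(exposed) + 1) / n)
        gain = nxt - current
        best_j, best_score = None, 0.0
        for j in range(n):
            if j in exposed:
                continue
            score = gain - lam * neighbor_costs[j]
            if score > best_score:
                best_j, best_score = j, score
        if best_j is None:  # no remaining exposure is worth its cost
            break
        exposed.append(best_j)
    return exposed

# Toy example: 8-dim preference/item vectors and 5 neighbors with random costs.
user_pref = rng.normal(size=8)
item_vec = rng.normal(size=8)
neighbor_costs = rng.uniform(0.0, 0.1, size=5)
print(steer_exposures(user_pref, item_vec, neighbor_costs, budget=2))
```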

Proactive Recommendation with Iterative Preference Guidance

1 code implementation · 12 Mar 2024 · Shuxian Bi, Wenjie Wang, Hang Pan, Fuli Feng, Xiangnan He

However, such recommender systems passively cater to user interests and even reinforce existing interests in the feedback loop, leading to problems like filter bubbles and opinion polarization.

Recommendation Systems

On the Equivalence of Decoupled Graph Convolution Network and Label Propagation

1 code implementation · 23 Oct 2020 · Hande Dong, Jiawei Chen, Fuli Feng, Xiangnan He, Shuxian Bi, Zhaolin Ding, Peng Cui

The original design of Graph Convolution Network (GCN) couples feature transformation and neighborhood aggregation for node representation learning. (A rough sketch contrasting the coupled and decoupled forms with label propagation follows this entry.)

Node Classification · Pseudo Label · +1
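
As a companion to the snippet above, here is a minimal NumPy sketch contrasting a coupled GCN layer, a decoupled variant (transform once, then propagate), and plain label propagation. The normalization, propagation depth, and toy graph are common conventions chosen for illustration, not the paper's method or experimental setup.

```python
import numpy as np

def sym_norm(adj):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    a = adj + np.eye(adj.shape[0])
    d = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d[:, None] * d[None, :]

def coupled_gcn_layer(a_hat, h, w):
    """Standard GCN layer: neighborhood aggregation and feature transformation
    are coupled inside every layer."""
    return np.maximum(a_hat @ h @ w, 0.0)  # ReLU

def decoupled_gcn(a_hat, x, w, k=10):
    """Decoupled variant: transform features once, then propagate K times with
    a parameter-free aggregation."""
    z = x @ w  # feature transformation (a full MLP in general)
    for _ in range(k):
        z = a_hat @ z
    return z

def label_propagation(a_hat, y_seed, k=10):
    """Plain label propagation: repeatedly smooth the seed labels over the graph."""
    y = y_seed.copy()
    for _ in range(k):
        y = a_hat @ y
    return y

# Toy graph: a 4-node path, 3 input features, 2 classes, 2 labeled nodes.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
a_hat = sym_norm(adj)
x = np.random.default_rng(0).normal(size=(4, 3))
w = np.random.default_rng(1).normal(size=(3, 2))
y_seed = np.array([[1, 0], [0, 0], [0, 0], [0, 1]], dtype=float)

print(coupled_gcn_layer(a_hat, x, w).shape,
      decoupled_gcn(a_hat, x, w).shape,
      label_propagation(a_hat, y_seed).shape)
```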

Data Augmentation View on Graph Convolutional Network and the Proposal of Monte Carlo Graph Learning

1 code implementation · 23 Jun 2020 · Hande Dong, Zhaolin Ding, Xiangnan He, Fuli Feng, Shuxian Bi

In this work, we introduce a new understanding of graph convolution -- data augmentation -- which is more transparent than previous understandings.

Data Augmentation · Graph Learning
