Search Results for author: Shaoguo Liu

Found 14 papers, 1 paper with code

AnchorGT: Efficient and Flexible Attention Architecture for Scalable Graph Transformers

no code implementations · 6 May 2024 · Wenhao Zhu, Guojie Song, Liang Wang, Shaoguo Liu

Graph Transformers (GTs) have significantly advanced the field of graph representation learning by overcoming the limitations of message-passing graph neural networks (GNNs) and demonstrating promising performance and expressive power.

Graph Representation Learning

FedAds: A Benchmark for Privacy-Preserving CVR Estimation with Vertical Federated Learning

no code implementations · 15 May 2023 · Penghui Wei, Hongjian Dou, Shaoguo Liu, Rongjun Tang, Li Liu, Liang Wang, Bo Zheng

We introduce FedAds, the first benchmark for CVR estimation with vFL, to facilitate standardized and systematic evaluations of vFL algorithms.

Privacy Preserving · Vertical Federated Learning

Hybrid Contrastive Constraints for Multi-Scenario Ad Ranking

no code implementations · 6 Feb 2023 · Shanlei Mu, Penghui Wei, Wayne Xin Zhao, Shaoguo Liu, Liang Wang, Bo Zheng

In this paper, we propose a Hybrid Contrastive Constrained approach (HC^2) for multi-scenario ad ranking.

Contrastive Learning

RLTP: Reinforcement Learning to Pace for Delayed Impression Modeling in Preloaded Ads

no code implementations · 6 Feb 2023 · Penghui Wei, Yongqiang Chen, Shaoguo Liu, Liang Wang, Bo Zheng

Over a whole delivery period, advertisers usually desire a certain impression count for their ads, and they also expect delivery performance to be as good as possible (e.g., achieving a high click-through rate).

Reinforcement Learning (RL)
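The delivery objective sketched in the snippet above (reach a desired impression count while keeping click performance high) can be illustrated as a toy pacing reward. The linear mix and the `alpha` weight are illustrative assumptions, not the paper's actual reward design.

```python
def pacing_reward(impressions: int, target: int, ctr: float, alpha: float = 0.5) -> float:
    """Hypothetical per-step reward for an ad-pacing agent.

    Mixes delivery progress (fraction of the desired impression count,
    capped at 1) with click performance (observed CTR). Both the linear
    combination and alpha=0.5 are illustrative choices.
    """
    progress = min(impressions / target, 1.0)
    return alpha * progress + (1.0 - alpha) * ctr
```

Raising `alpha` favors hitting the impression target; lowering it favors click performance.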

Global Balanced Experts for Federated Long-Tailed Learning

1 code implementation · ICCV 2023 · Yaopei Zeng, Lei Liu, Li Liu, Li Shen, Shaoguo Liu, Baoyuan Wu

In particular, a proxy is derived from the accumulated gradients uploaded by the clients after local training and is shared with all clients as the class prior for re-balanced training.

Federated Learning · Privacy Preserving
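The gradient-derived class prior mentioned in the snippet above can be sketched minimally. Using per-class classifier-gradient norms as a frequency proxy and applying logit adjustment with the shared prior are simplifying assumptions for illustration, not necessarily the paper's exact procedure.

```python
import numpy as np

def estimate_class_prior(accumulated_grads: np.ndarray) -> np.ndarray:
    """Derive a class prior from per-class accumulated gradient rows.

    Illustrative assumption: classes seen more often during local training
    accumulate larger classifier-row gradients, so the normalized row norms
    serve as a proxy for the global label distribution.
    """
    magnitudes = np.linalg.norm(accumulated_grads, axis=1)  # one row per class
    return magnitudes / magnitudes.sum()

def rebalanced_logits(logits: np.ndarray, prior: np.ndarray, tau: float = 1.0) -> np.ndarray:
    """Logit adjustment with the shared prior (a common re-balancing trick)."""
    return logits - tau * np.log(prior + 1e-12)
```

Subtracting the log-prior boosts tail classes relative to head classes at training time.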

Correlative Preference Transfer with Hierarchical Hypergraph Network for Multi-Domain Recommendation

no code implementations · 21 Nov 2022 · Zixuan Xu, Penghui Wei, Shaoguo Liu, Weimin Zhang, Liang Wang, Bo Zheng

Conventional methods based on graph neural networks usually handle each domain separately or train a single shared model to serve all domains.

Graph Neural Network · Marketing +1

Modeling Adaptive Fine-grained Task Relatedness for Joint CTR-CVR Estimation

no code implementations · 29 Aug 2022 · Zihan Lin, Xuanhua Yang, Xiaoyu Peng, Wayne Xin Zhao, Shaoguo Liu, Liang Wang, Bo Zheng

For this purpose, we build a relatedness prediction network that predicts the contrast strength for the inter-task representations of an instance.

Contrastive Learning · Multi-Task Learning +2
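The per-instance contrast strength described in the snippet above can be illustrated with a small numpy sketch. The predictor's shape (a single linear layer with a sigmoid) and the alignment term it weights are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # illustrative dimension of each task representation
W = rng.normal(scale=0.1, size=2 * d)  # toy weights of the relatedness predictor

def contrast_strength(ctr_repr: np.ndarray, cvr_repr: np.ndarray) -> float:
    """Map the two task representations of one instance to a weight in (0, 1)."""
    x = np.concatenate([ctr_repr, cvr_repr])
    return float(1.0 / (1.0 + np.exp(-(x @ W))))  # sigmoid

def weighted_contrastive_term(ctr_repr: np.ndarray, cvr_repr: np.ndarray,
                              strength: float) -> float:
    """Scale a simple alignment term (1 - cosine similarity) by the strength."""
    cos = ctr_repr @ cvr_repr / (np.linalg.norm(ctr_repr) * np.linalg.norm(cvr_repr))
    return strength * (1.0 - cos)
```

Instances predicted as highly related receive a larger contrastive weight; unrelated ones are penalized less.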

Towards Personalized Bundle Creative Generation with Contrastive Non-Autoregressive Decoding

no code implementations · 30 May 2022 · Penghui Wei, Shaoguo Liu, Xuanhua Yang, Liang Wang, Bo Zheng

Current bundle generation studies focus on generating a combination of items to improve user experience.

CREATER: CTR-driven Advertising Text Generation with Controlled Pre-Training and Contrastive Fine-Tuning

no code implementations · NAACL (ACL) 2022 · Penghui Wei, Xuanhua Yang, Shaoguo Liu, Liang Wang, Bo Zheng

This paper focuses on automatically generating the text of an ad, and the goal is that the generated text can capture user interest for achieving higher click-through rate (CTR).

Contrastive Learning · Text Generation

UKD: Debiasing Conversion Rate Estimation via Uncertainty-regularized Knowledge Distillation

no code implementations · 20 Jan 2022 · Zixuan Xu, Penghui Wei, Weimin Zhang, Shaoguo Liu, Liang Wang, Bo Zheng

Then a student model is trained on both clicked and unclicked ads with knowledge distillation, performing uncertainty modeling to alleviate the inherent noise in pseudo-labels.

Knowledge Distillation · Selection bias
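The uncertainty-regularized distillation described in the snippet above can be sketched as a down-weighted cross-entropy to the teacher's pseudo-label. Both the multiplicative `(1 - uncertainty)` weighting and the source of the uncertainty score are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def distillation_loss(student_prob: float, teacher_prob: float,
                      uncertainty: float) -> float:
    """Binary cross-entropy to the teacher pseudo-label, down-weighted by uncertainty.

    `uncertainty` in [0, 1] is assumed to come from, e.g., the variance of
    several stochastic teacher passes (a hypothetical choice here); noisier
    pseudo-labels then contribute less to the student's training signal.
    """
    eps = 1e-12
    ce = -(teacher_prob * np.log(student_prob + eps)
           + (1.0 - teacher_prob) * np.log(1.0 - student_prob + eps))
    return (1.0 - uncertainty) * ce
```

On unclicked ads, where pseudo-labels are least reliable, the weighting limits how much noise reaches the student.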
