Search Results for author: Xiangzheng Liu

Found 4 papers, 1 paper with code

FedUD: Exploiting Unaligned Data for Cross-Platform Federated Click-Through Rate Prediction

no code implementations • 26 Jul 2024 • Wentao Ouyang, Rui Dong, Ri Tao, Xiangzheng Liu

In the second step, FedUD applies the learned knowledge to enrich the representations of the host party's unaligned data such that both aligned and unaligned data can contribute to federated model training.
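The snippet above only summarizes the idea, but the enrichment step can be sketched in a minimal form: assume a transfer mapping `W` learned on aligned data (hypothetical; the paper's actual transfer module is not described here) that infers a pseudo guest-party representation from host-party features, so unaligned host-only samples gain the same enriched input as aligned ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: host features (d_h), guest-side representation (d_g).
d_h, d_g = 8, 4

# Assumed stand-in for the knowledge learned in the first step on aligned data:
# a linear map from host features to an approximate guest representation.
W = rng.normal(size=(d_h, d_g))

def enrich(host_feats, W):
    """Infer a pseudo guest-side representation and concatenate it with the
    host features, so unaligned samples can join federated model training."""
    pseudo_guest = host_feats @ W
    return np.concatenate([host_feats, pseudo_guest], axis=-1)

unaligned_batch = rng.normal(size=(16, d_h))  # host-only (unaligned) samples
enriched = enrich(unaligned_batch, W)
print(enriched.shape)  # (16, 12)
```

In this sketch both aligned samples (with real guest representations) and unaligned samples (with inferred ones) share the same input layout, which is what lets both contribute to one federated model.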

Click-Through Rate Prediction • Knowledge Distillation • +3

Masked Multi-Domain Network: Multi-Type and Multi-Scenario Conversion Rate Prediction with a Single Model

no code implementations • 26 Mar 2024 • Wentao Ouyang, Xiuwu Zhang, Chaofeng Guo, Shukui Ren, Yupei Sui, Kun Zhang, Jinmei Luo, Yunfeng Chen, Dongbo Xu, Xiangzheng Liu, Yanlong Du

A desired model for this problem should satisfy the following requirements: 1) Accuracy: the model should achieve fine-grained accuracy with respect to any conversion type in any display scenario.

Contrastive Learning for Conversion Rate Prediction

1 code implementation • 12 Jul 2023 • Wentao Ouyang, Rui Dong, Xiuwu Zhang, Chaofeng Guo, Jinmei Luo, Xiangzheng Liu, Yanlong Du

To tailor the contrastive learning task to the CVR prediction problem, we propose embedding masking (EM), rather than feature masking, to create two views of augmented samples.
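A minimal sketch of the embedding-masking idea described in the snippet, under the assumption (not stated in the excerpt) that masking means zeroing random dimensions of each sample's embedding vector rather than dropping raw input features; the mask rate and shapes here are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def embedding_mask(emb, mask_rate, rng):
    """Zero out a random subset of embedding dimensions (not raw features)
    to produce one augmented view of each sample."""
    keep = rng.random(emb.shape) >= mask_rate
    return emb * keep

batch_emb = rng.normal(size=(32, 16))          # sample embeddings
view_a = embedding_mask(batch_emb, 0.2, rng)   # two independently masked views
view_b = embedding_mask(batch_emb, 0.2, rng)
# A contrastive objective would then pull view_a[i] and view_b[i] together
# while pushing apart views that come from different samples.
print(view_a.shape == view_b.shape)  # True
```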

Contrastive Learning • Prediction

The Short Text Matching Model Enhanced with Knowledge via Contrastive Learning

no code implementations8 Apr 2023 Ruiqiang Liu, Qiqiang Zhong, Mengmeng Cui, Hanjie Mai, Qiang Zhang, Shaohua Xu, Xiangzheng Liu, Yanlong Du

The model uses a generative model to produce corresponding complement sentences, and applies contrastive learning to guide the encoder toward a more semantically meaningful encoding of the original sentence.
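The contrastive step described above can be illustrated with a standard InfoNCE-style loss (an assumption; the excerpt does not name the exact objective): each original sentence's encoding is treated as an anchor whose positive is the encoding of its knowledge-complemented sentence, with the rest of the batch serving as negatives. All encodings below are random stand-ins for real encoder outputs.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss: each anchor (original-sentence encoding)
    is pulled toward its positive (complement-sentence encoding) and pushed
    away from the other sentences in the batch."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature          # pairwise cosine similarities
    idx = np.arange(len(a))                 # matching pairs on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[idx, idx].mean()

rng = np.random.default_rng(2)
orig_enc = rng.normal(size=(8, 32))                     # original sentences
comp_enc = orig_enc + 0.05 * rng.normal(size=(8, 32))   # complement sentences
loss = info_nce(orig_enc, comp_enc)
print(loss > 0)  # True: -log of a softmax probability is always positive
```

Minimizing this loss pushes the encoder to place a sentence and its complement close together, which is one way the added knowledge can shape the original sentence's representation.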

Contrastive Learning • Sentence • +1
