Search Results for author: Maofei Que

Found 3 papers, 3 papers with code

Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Dataset for Pre-training and Benchmarks

1 code implementation • 7 Jun 2023 • Haiyang Xu, Qinghao Ye, Xuan Wu, Ming Yan, Yuan Miao, Jiabo Ye, Guohai Xu, Anwen Hu, Yaya Shi, Guangwei Xu, Chenliang Li, Qi Qian, Maofei Que, Ji Zhang, Xiao Zeng, Fei Huang

In addition, to facilitate a comprehensive evaluation of video-language models, we carefully build the largest human-annotated Chinese benchmarks covering three popular video-language tasks: cross-modal retrieval, video captioning, and video category classification.

Cross-Modal Retrieval · Language Modelling · +3
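
The cross-modal retrieval task in the benchmark above is typically scored with Recall@K over a query-video similarity matrix. The snippet below is a minimal, generic sketch of that metric; the function name and the diagonal ground-truth assumption are illustrative and not taken from the Youku-mPLUG evaluation code.

```python
# Hypothetical sketch: Recall@K for text-to-video retrieval from a similarity
# matrix. Names and shapes are illustrative, not from the Youku-mPLUG codebase.
import numpy as np

def recall_at_k(sim: np.ndarray, k: int) -> float:
    """sim[i, j] = similarity between text query i and video j.
    Assumes the matching video for query i sits at index i (diagonal)."""
    # Rank videos for each query from most to least similar.
    ranking = np.argsort(-sim, axis=1)
    # A query is a hit if its ground-truth video appears in the top-k.
    hits = (ranking[:, :k] == np.arange(sim.shape[0])[:, None]).any(axis=1)
    return float(hits.mean())

# Toy example: 4 text queries vs. 4 videos with random similarity scores.
rng = np.random.default_rng(0)
sim = rng.normal(size=(4, 4))
print(recall_at_k(sim, k=1), recall_at_k(sim, k=3))
```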

PURS: Personalized Unexpected Recommender System for Improving User Satisfaction

1 code implementation • 5 Jun 2021 • Pan Li, Maofei Que, Zhichao Jiang, Yao Hu, Alexander Tuzhilin

Classical recommender system methods typically face the filter bubble problem, where users only receive recommendations of items they are already familiar with, making them bored and dissatisfied.

Recommendation Systems
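
The PURS entry above addresses the filter bubble by recommending items that are both relevant and unexpected. As a rough illustration of that idea (not the paper's actual model, which learns unexpectedness and a personalized weighting end to end), one can add a distance-based unexpectedness bonus to a base relevance score:

```python
# Hedged sketch of unexpectedness-aware scoring: blend a base relevance score
# with an "unexpectedness" term measured as the distance of a candidate from
# the user's historical interests in embedding space. Illustrative only; not
# PURS's exact formulation.
import numpy as np

def unexpectedness(candidate: np.ndarray, history: np.ndarray) -> float:
    """Distance of the candidate embedding from the centroid of the
    user's clicked-item embeddings (history: [n_items, dim])."""
    centroid = history.mean(axis=0)
    return float(np.linalg.norm(candidate - centroid))

def utility(relevance: float, candidate: np.ndarray,
            history: np.ndarray, weight: float = 0.3) -> float:
    """Final score: relevance plus a weighted unexpectedness bonus.
    `weight` stands in for the personalized factor the paper learns."""
    return relevance + weight * unexpectedness(candidate, history)

# Toy usage with random 8-dimensional item embeddings.
rng = np.random.default_rng(1)
history = rng.normal(size=(5, 8))
candidate = rng.normal(size=8)
print(utility(relevance=0.72, candidate=candidate, history=history))
```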

Dual Attentive Sequential Learning for Cross-Domain Click-Through Rate Prediction

1 code implementation • 5 Jun 2021 • Pan Li, Zhichao Jiang, Maofei Que, Yao Hu, Alexander Tuzhilin

While several cross-domain sequential recommendation models have been proposed to leverage information from a source domain to improve CTR predictions in a target domain, they do not take into account the bidirectional latent relations of user preferences across source-target domain pairs.

Click-Through Rate Prediction · Sequential Recommendation
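
The key point in the DASL entry above is modeling bidirectional latent relations between a source and a target domain. A minimal sketch of that idea, assuming two behavior sequences already embedded to a shared dimension, is a pair of cross-attention blocks, one per direction, feeding a CTR head; module names and sizes here are hypothetical and this is not the authors' implementation.

```python
# Hedged sketch of bidirectional cross-domain attention: each domain's
# behavior sequence attends to the other's, so latent preferences transfer
# in both directions before CTR prediction. Illustrative, not the DASL code.
import torch
import torch.nn as nn

class DualCrossAttention(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.src_to_tgt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.tgt_to_src = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.predict = nn.Linear(2 * dim, 1)  # CTR head on pooled features

    def forward(self, src_seq: torch.Tensor, tgt_seq: torch.Tensor):
        # Target sequence queries the source sequence, and vice versa.
        tgt_enriched, _ = self.src_to_tgt(tgt_seq, src_seq, src_seq)
        src_enriched, _ = self.tgt_to_src(src_seq, tgt_seq, tgt_seq)
        # Mean-pool each enriched sequence and predict a click probability.
        pooled = torch.cat([src_enriched.mean(1), tgt_enriched.mean(1)], dim=-1)
        return torch.sigmoid(self.predict(pooled)).squeeze(-1)

# Toy usage: batch of 2 users, 10-step source history, 6-step target history.
model = DualCrossAttention()
src = torch.randn(2, 10, 64)
tgt = torch.randn(2, 6, 64)
print(model(src, tgt).shape)  # torch.Size([2])
```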
