no code implementations • 15 Sep 2024 • Qingyao Li, Wei Xia, Kounianhua Du, Xinyi Dai, Ruiming Tang, Yasheng Wang, Yong Yu, Weinan Zhang
More importantly, we construct verbal feedback from fine-grained code execution feedback to refine erroneous thoughts during the search.
no code implementations • 2 Sep 2024 • Weiwen Liu, Xu Huang, Xingshan Zeng, Xinlong Hao, Shuai Yu, Dexun Li, Shuai Wang, Weinan Gan, Zhengying Liu, Yuanqing Yu, Zezhong Wang, Yuxian Wang, Wu Ning, Yutai Hou, Bin Wang, Chuhan Wu, Xinzhi Wang, Yong liu, Yasheng Wang, Duyu Tang, Dandan Tu, Lifeng Shang, Xin Jiang, Ruiming Tang, Defu Lian, Qun Liu, Enhong Chen
Function calling significantly extends the application boundary of large language models, where high-quality and diverse training data is critical for unlocking this capability.
no code implementations • 20 Aug 2024 • Yunjia Xi, Weiwen Liu, Jianghao Lin, Muyan Weng, Xiaoling Cai, Hong Zhu, Jieming Zhu, Bo Chen, Ruiming Tang, Yong Yu, Weinan Zhang
Recommender systems (RSs) play a pervasive role in today's online services, yet their closed-loop nature constrains their access to open-world knowledge.
1 code implementation • 19 Aug 2024 • Chuhan Wu, Ruiming Tang
Based on only a few key hyperparameters of the LLM architecture and the size of the training data, we obtain quite accurate MMLU predictions for LLMs of diverse sizes and architectures developed by different organizations in different years.
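As a rough illustration of this kind of performance-law fitting (not the paper's actual functional form or data), one can regress a benchmark score on a few log-scaled size variables. The observation table and the log-linear form below are placeholder assumptions.

```python
import numpy as np

# Hypothetical observations: (parameters in billions, training tokens in trillions, MMLU score).
# These numbers are illustrative placeholders, not data from the paper.
observations = [
    (7.0, 1.0, 45.0),
    (13.0, 1.4, 55.0),
    (34.0, 2.0, 62.0),
    (70.0, 2.0, 69.0),
]

# Fit a simple log-linear model: mmlu ~ a*log(params) + b*log(tokens) + c.
X = np.array([[np.log(p), np.log(t), 1.0] for p, t, _ in observations])
y = np.array([score for _, _, score in observations])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_mmlu(params_b: float, tokens_t: float) -> float:
    """Predict MMLU from model size and data size under the fitted log-linear form."""
    return float(np.array([np.log(params_b), np.log(tokens_t), 1.0]) @ coef)

print(predict_mmlu(30.0, 1.8))
```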
no code implementations • 15 Aug 2024 • Yang Yang, Bo Chen, Chenxu Zhu, Menghui Zhu, Xinyi Dai, Huifeng Guo, Muyu Zhang, Zhenhua Dong, Ruiming Tang
Click-Through Rate (CTR) prediction is a fundamental technique for online advertising recommendation and the complex online competitive auction process also brings many difficulties to CTR optimization.
no code implementations • 14 Aug 2024 • Yuxin Jiang, Bo Huang, YuFei Wang, Xingshan Zeng, Liangyou Li, Yasheng Wang, Xin Jiang, Lifeng Shang, Ruiming Tang, Wei Wang
Direct preference optimization (DPO), a widely adopted offline preference optimization algorithm, aims to align large language models (LLMs) with human-desired behaviors using pairwise preference data.
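For context, here is a minimal PyTorch sketch of the standard DPO objective on pairwise preference data, written from the published DPO formulation rather than from this paper's code; the sequence-level log-probabilities under the policy and the frozen reference model are assumed to be precomputed.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss: push the policy to prefer the chosen response
    relative to the frozen reference model, scaled by beta."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with a batch of three sequence log-probabilities.
loss = dpo_loss(torch.tensor([-12.0, -9.5, -11.0]), torch.tensor([-13.0, -10.0, -10.5]),
                torch.tensor([-12.5, -9.8, -11.2]), torch.tensor([-12.8, -10.1, -10.9]))
print(loss.item())
```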
no code implementations • 13 Aug 2024 • Yusheng Lu, Zhaocheng Du, Xiangyang Li, Xiangyu Zhao, Weiwen Liu, Yichao Wang, Huifeng Guo, Ruiming Tang, Zhenhua Dong, Yongrui Duan
And employs expectation maximization to infer the embedded latent profile, minimizing textual noise by fixing the prompt template.
1 code implementation • 11 Aug 2024 • Yunjia Xi, Hangyu Wang, Bo Chen, Jianghao Lin, Menghui Zhu, Weiwen Liu, Ruiming Tang, Weinan Zhang, Yong Yu
This generation inefficiency stems from the autoregressive nature of LLMs, and a promising direction for acceleration is speculative decoding, a Draft-then-Verify paradigm that increases the number of generated tokens per decoding step.
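A simplified sketch of the Draft-then-Verify idea (greedy acceptance only) illustrates why several tokens can be committed per target-model forward pass; the `draft_model`/`target_model` interfaces and the toy models are assumptions for illustration, not this paper's drafting strategy.

```python
import torch

def speculative_step(prefix, draft_model, target_model, k=4):
    """One Draft-then-Verify step: the cheap draft model proposes k tokens greedily,
    the target model scores the extended prefix in a single pass, and the longest
    run of draft tokens that the target model also ranks first is kept."""
    draft_tokens, ctx = [], list(prefix)
    for _ in range(k):
        nxt = int(torch.argmax(draft_model(torch.tensor(ctx))))  # greedy draft proposal
        draft_tokens.append(nxt)
        ctx.append(nxt)

    target_logits = target_model(torch.tensor(ctx))  # [len(ctx), vocab], one forward pass

    accepted = []
    for i, tok in enumerate(draft_tokens):
        pos = len(prefix) + i - 1                    # logits that predict this draft token
        if int(torch.argmax(target_logits[pos])) == tok:
            accepted.append(tok)
        else:
            accepted.append(int(torch.argmax(target_logits[pos])))  # target's correction
            break
    return prefix + accepted

# Toy usage: both "models" are random linear maps over a tiny vocabulary.
vocab, dim = 50, 8
emb = torch.randn(vocab, dim)
W_small, W_big = torch.randn(dim, vocab), torch.randn(dim, vocab)
draft = lambda ids: emb[ids[-1]] @ W_small           # [vocab] logits for the next token
target = lambda ids: emb[ids] @ W_big                # [len(ids), vocab] logits per position
print(speculative_step([1, 2, 3], draft, target, k=4))
```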
no code implementations • 7 Aug 2024 • Jiachen Zhu, Jianghao Lin, Xinyi Dai, Bo Chen, Rong Shan, Jieming Zhu, Ruiming Tang, Yong Yu, Weinan Zhang
Thus, LLMs only see a small fraction of the datasets (e.g., less than 10%) instead of the whole datasets, limiting their exposure to the full training space.
no code implementations • 5 Aug 2024 • Shiwei Li, Huifeng Guo, Xing Tang, Ruiming Tang, Lu Hou, Ruixuan Li, Rui Zhang
In this survey, we provide a comprehensive review of embedding compression approaches in recommender systems.
no code implementations • 14 Jul 2024 • Bo Chen, Xinyi Dai, Huifeng Guo, Wei Guo, Weiwen Liu, Yong liu, Jiarui Qin, Ruiming Tang, Yichao Wang, Chuhan Wu, Yaxiong Wu, Hao Zhang
Recommender systems (RS) are vital for managing information overload and delivering personalized content, responding to users' diverse information needs.
1 code implementation • 9 Jul 2024 • Mingjia Yin, Chuhan Wu, YuFei Wang, Hao Wang, Wei Guo, Yasheng Wang, Yong liu, Ruiming Tang, Defu Lian, Enhong Chen
Inspired by the information compression nature of LLMs, we uncover an "entropy law" that connects LLM performance with data compression ratio and first-epoch training loss, which reflect the information redundancy of a dataset and the mastery of inherent knowledge encoded in this dataset, respectively.
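One of the two quantities in the stated entropy law, the dataset's compression ratio, can be approximated with an off-the-shelf compressor; using zlib as the proxy below is an assumption for illustration, not necessarily the paper's exact measurement.

```python
import zlib

def compression_ratio(texts):
    """Compressed size / raw size of the concatenated corpus.
    Lower values indicate more redundancy (less information per byte)."""
    raw = "\n".join(texts).encode("utf-8")
    return len(zlib.compress(raw, level=9)) / len(raw)

redundant = ["the cat sat on the mat"] * 1000
diverse = [f"sample {i}: value={i * i % 97}" for i in range(1000)]
print(compression_ratio(redundant), compression_ratio(diverse))
```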
1 code implementation • 6 Jul 2024 • Yunjia Xi, Weiwen Liu, Jianghao Lin, Bo Chen, Ruiming Tang, Weinan Zhang, Yong Yu
The preferences embedded in the user's historical dialogue sessions and the current session exhibit continuity and sequentiality, and we refer to CRSs with this characteristic as sequential CRSs.
1 code implementation • 3 Jul 2024 • Xiangyang Li, Kuicai Dong, Yi Quan Lee, Wei Xia, Yichun Yin, Hao Zhang, Yong liu, Yasheng Wang, Ruiming Tang
Despite the substantial success of Information Retrieval (IR) in various NLP tasks, most IR systems predominantly handle queries and corpora in natural language, neglecting the domain of code retrieval.
Ranked #1 on Code Search on CoIR
no code implementations • 1 Jul 2024 • Lingyue Fu, Hao Guan, Kounianhua Du, Jianghao Lin, Wei Xia, Weinan Zhang, Ruiming Tang, Yasheng Wang, Yong Yu
Knowledge Tracing (KT) aims to determine whether students will respond correctly to the next question, which is a crucial task in intelligent tutoring systems (ITS).
no code implementations • 27 Jun 2024 • Jizheng Chen, Kounianhua Du, Jianghao Lin, Bo Chen, Ruiming Tang, Weinan Zhang
Concretely, we propose to inject the preference understanding capability into the LLM via a GAT expert model, where user preference is better encoded by propagating, in parallel, the temporal relations, rating signals, and various side information of historical items.
no code implementations • 18 Jun 2024 • Jingtong Gao, Bo Chen, Xiangyu Zhao, Weiwen Liu, Xiangyang Li, Yichao Wang, Zijian Zhang, Wanyu Wang, Yuyang Ye, Shanru Lin, Huifeng Guo, Ruiming Tang
Reranking is a critical component in recommender systems, playing an essential role in refining the output of recommendation algorithms.
no code implementations • 18 Jun 2024 • Yuhao Wang, Yichao Wang, Zichuan Fu, Xiangyang Li, Xiangyu Zhao, Huifeng Guo, Ruiming Tang
As the demand for more personalized recommendation grows and commercial scenarios proliferate, the study of multi-scenario recommendation (MSR), which uses data from all scenarios to simultaneously improve their recommendation performance, has attracted much attention.
no code implementations • 4 Jun 2024 • Jianghao Lin, Xinyi Dai, Rong Shan, Bo Chen, Ruiming Tang, Yong Yu, Weinan Zhang
Hence, we propose and verify our core viewpoint: Large Language Models Make Sample-Efficient Recommender Systems.
no code implementations • 29 May 2024 • Hao Zhang, Yuyang Zhang, Xiaoguang Li, Wenxuan Shi, Haonan Xu, Huanshuo Liu, Yasheng Wang, Lifeng Shang, Qun Liu, Yong liu, Ruiming Tang
Integrating external knowledge into large language models (LLMs) presents a promising solution to overcome the limitations imposed by their antiquated and static parametric memory.
no code implementations • 21 May 2024 • Qingyao Li, Wei Xia, Kounianhua Du, Qiji Zhang, Weinan Zhang, Ruiming Tang, Yong Yu
However, integrating LLMs into concept recommendation presents two urgent challenges: 1) How to construct text for concepts that effectively incorporates the human knowledge system?
no code implementations • 21 May 2024 • Yuang Zhao, Zhaocheng Du, Qinglin Jia, Linxuan Zhang, Zhenhua Dong, Ruiming Tang
With the increase in the business scale and number of domains in online advertising, multi-domain ad recommendation has become a mainstream solution in the industry.
no code implementations • 20 May 2024 • Kounianhua Du, Jizheng Chen, Jianghao Lin, Menghui Zhu, Bo Chen, Shuai Li, Ruiming Tang
In this paper, we propose two constraints to extract Essential and Disentangled Knowledge from past data for rational and generalized recommendation enhancement, which improves the capabilities of the parametric knowledge base without increasing its size.
1 code implementation • 20 May 2024 • Kounianhua Du, Jizheng Chen, Jianghao Lin, Yunjia Xi, Hangyu Wang, Xinyi Dai, Bo Chen, Ruiming Tang, Weinan Zhang
In this paper, we propose DisCo to Disentangle the unique patterns from the two representation spaces and Collaborate the two spaces for recommendation enhancement, where both the specificity and the consistency of the two spaces are captured.
no code implementations • 17 May 2024 • Xingmei Wang, Weiwen Liu, Xiaolong Chen, Qi Liu, Xu Huang, Defu Lian, Xiangyang Li, Yasheng Wang, Zhenhua Dong, Ruiming Tang
This model-agnostic framework can be equipped with plug-and-play textual features, with item-level alignment enhancing the utilization of external information while maintaining training and inference efficiency.
no code implementations • 3 May 2024 • Kounianhua Du, Renting Rui, Huacan Chai, Lingyue Fu, Wei Xia, Yasheng Wang, Ruiming Tang, Yong Yu, Weinan Zhang
Despite the intelligence shown by general large language models, their performance on code generation can still be improved, owing to the syntactic gap and vocabulary mismatch between natural language and the various programming languages.
no code implementations • 28 Apr 2024 • Huanshuo Liu, Bo Chen, Menghui Zhu, Jianghao Lin, Jiarui Qin, Yang Yang, Hao Zhang, Ruiming Tang
Specifically, a knowledge base, consisting of a retrieval-oriented embedding layer and a knowledge encoder, is designed to preserve and imitate the retrieved & aggregated representations in a decomposition-reconstruction paradigm.
no code implementations • 15 Apr 2024 • JunJie Huang, Guohao Cai, Jieming Zhu, Zhenhua Dong, Ruiming Tang, Weinan Zhang, Yong Yu
RAR consists of two key sub-modules, which synergistically gather information from a vast pool of look-alike users and recall items, resulting in enriched user representations.
no code implementations • 11 Apr 2024 • Jiachen Zhu, Yichao Wang, Jianghao Lin, Jiarui Qin, Ruiming Tang, Weinan Zhang, Yong Yu
Furthermore, through causal graph analysis, we find that the scenario itself directly influences click behavior; yet existing approaches incorporate data from other scenarios when training on the current scenario, which introduces prediction biases because click behaviors from other scenarios are used directly to train the model.
no code implementations • 11 Apr 2024 • Xu Huang, Weiwen Liu, Xiaolong Chen, Xingmei Wang, Defu Lian, Yasheng Wang, Ruiming Tang, Enhong Chen
Concretely, WESE involves decoupling the exploration and exploitation process, employing a cost-effective weak agent to perform exploration tasks for global knowledge.
no code implementations • 31 Mar 2024 • Wenlin Zhang, Chuhan Wu, Xiangyang Li, Yuhao Wang, Kuicai Dong, Yichao Wang, Xinyi Dai, Xiangyu Zhao, Huifeng Guo, Ruiming Tang
Recommender systems aim to predict user interest based on historical behavioral data.
no code implementations • 25 Mar 2024 • Yunjia Xi, Weiwen Liu, Jianghao Lin, Chuhan Wu, Bo Chen, Ruiming Tang, Weinan Zhang, Yong Yu
The rise of large language models (LLMs) has opened new opportunities in Recommender Systems (RSs) by enhancing user behavior modeling and content understanding.
2 code implementations • 19 Mar 2024 • Pengyue Jia, Yejing Wang, Zhaocheng Du, Xiangyu Zhao, Yichao Wang, Bo Chen, Wanyu Wang, Huifeng Guo, Ruiming Tang
Secondly, the existing literature lacks detailed analyses of selection attributes based on large-scale datasets, as well as thorough comparisons among selection techniques and DRS backbones, which restricts the generalizability of findings and impedes deployment on DRS.
1 code implementation • 6 Mar 2024 • Hangyu Wang, Jianghao Lin, Bo Chen, Yang Yang, Ruiming Tang, Weinan Zhang, Yong Yu
However, in order to protect user privacy and optimize utility, it is also crucial for LLMRec to intentionally forget specific user data, which is generally referred to as recommendation unlearning.
1 code implementation • 19 Feb 2024 • Yuxin Jiang, YuFei Wang, Chuhan Wu, Wanjun Zhong, Xingshan Zeng, Jiahui Gao, Liangyou Li, Xin Jiang, Lifeng Shang, Ruiming Tang, Qun Liu, Wei Wang
Knowledge editing techniques, aiming to efficiently modify a minor proportion of knowledge in large language models (LLMs) without negatively impacting performance across other inputs, have garnered widespread attention.
no code implementations • 15 Feb 2024 • Dexun Li, Cong Zhang, Kuicai Dong, Derrick Goh Xin Deik, Ruiming Tang, Yong liu
We propose the Distributional Preference Reward Model (DPRM), a simple yet effective framework to align large language models with diverse human preferences.
no code implementations • 5 Feb 2024 • Xu Huang, Weiwen Liu, Xiaolong Chen, Xingmei Wang, Hao Wang, Defu Lian, Yasheng Wang, Ruiming Tang, Enhong Chen
As Large Language Models (LLMs) have shown significant intelligence, the progress to leverage LLMs as planning modules of autonomous agents has attracted more attention.
no code implementations • 21 Jan 2024 • Jiarui Qin, Weiwen Liu, Ruiming Tang, Weinan Zhang, Yong Yu
A personalized knowledge adaptation unit is devised to effectively exploit the information from the knowledge base by adapting the retrieved knowledge to the target samples.
no code implementations • 27 Dec 2023 • Qingyao Li, Lingyue Fu, Weiming Zhang, Xianyu Chen, Jingwei Yu, Wei Xia, Weinan Zhang, Ruiming Tang, Yong Yu
Solving the problems encountered by students poses a significant challenge for traditional deep learning models, as it requires not only a broad spectrum of subject knowledge but also the ability to understand what constitutes a student's individual difficulties.
1 code implementation • 17 Dec 2023 • Zichuan Fu, Xiangyang Li, Chuhan Wu, Yichao Wang, Kuicai Dong, Xiangyu Zhao, Mengchen Zhao, Huifeng Guo, Ruiming Tang
Click-Through Rate (CTR) prediction is a crucial task in online recommendation platforms as it involves estimating the probability of user engagement with advertisements or items by clicking on them.
1 code implementation • 30 Nov 2023 • Liangcai Su, Fan Yan, Jieming Zhu, Xi Xiao, Haoyi Duan, Zhou Zhao, Zhenhua Dong, Ruiming Tang
Two-tower models are a prevalent matching framework for recommendation, which have been widely deployed in industrial applications.
1 code implementation • 6 Nov 2023 • Mingjia Yin, Hao Wang, Xiang Xu, Likang Wu, Sirui Zhao, Wei Guo, Yong liu, Ruiming Tang, Defu Lian, Enhong Chen
To this end, we propose a graph-driven framework, named Adaptive and Personalized Graph Learning for Sequential Recommendation (APGL4SR), that incorporates adaptive and personalized global collaborative information into sequential recommendation systems.
no code implementations • 6 Nov 2023 • Fuyuan Lyu, Yaochen Hu, Xing Tang, Yingxue Zhang, Ruiming Tang, Xue Liu
Hence, we propose a hypothesis that the negative sampler should align with the capacity of the recommendation models as well as the statistics of the datasets to achieve optimal performance.
1 code implementation • 30 Oct 2023 • Hangyu Wang, Jianghao Lin, Xiangyang Li, Bo Chen, Chenxu Zhu, Ruiming Tang, Weinan Zhang, Yong Yu
In this paper, we propose to conduct Fine-grained feature-level ALignment between ID-based Models and Pretrained Language Models (FLIP) for CTR prediction.
no code implementations • 13 Oct 2023 • Jianghao Lin, Bo Chen, Hangyu Wang, Yunjia Xi, Yanru Qu, Xinyi Dai, Kangning Zhang, Ruiming Tang, Yong Yu, Weinan Zhang
Traditional CTR models convert the multi-field categorical data into ID features via one-hot encoding, and extract the collaborative signals among features.
1 code implementation • 11 Oct 2023 • Hangyu Wang, Ting Long, Liang Yin, Weinan Zhang, Wei Xia, Qichen Hong, Dingyin Xia, Ruiming Tang, Yong Yu
Besides, the students' response records contain valuable relational information between questions and knowledge concepts.
no code implementations • 7 Oct 2023 • Zhenhua Dong, Jieming Zhu, Weiwen Liu, Ruiming Tang
Huawei's vision and mission is to build a fully connected intelligent world.
2 code implementations • 22 Sep 2023 • Qidong Liu, Fan Yan, Xiangyu Zhao, Zhaocheng Du, Huifeng Guo, Ruiming Tang, Feng Tian
However, sequential recommendation often faces the problem of data sparsity, which widely exists in recommender systems.
no code implementations • DLP@RecSys 2023 • Qi Zhang, Chuhan Wu, Jieming Zhu, Jingjie Li, Qinglin Jia, Ruiming Tang, Rui Zhang, Liangbi Li
We then select them in a domain-aware way to promote informative features for different domains.
2 code implementations • 12 Sep 2023 • Xiaopeng Li, Fan Yan, Xiangyu Zhao, Yichao Wang, Bo Chen, Huifeng Guo, Ruiming Tang
Secondly, due to the distribution differences among domains, the utilization of static parameters in existing methods limits their flexibility to adapt to diverse domains.
no code implementations • 5 Sep 2023 • Jingtong Gao, Bo Chen, Menghui Zhu, Xiangyu Zhao, Xiaopeng Li, Yuhao Wang, Yichao Wang, Huifeng Guo, Ruiming Tang
To address these limitations, we propose a Scenario-Aware Hierarchical Dynamic Network for Multi-Scenario Recommendations (HierRec), which perceives implicit patterns adaptively and conducts explicit and implicit scenario modeling jointly.
1 code implementation • 22 Aug 2023 • Jianghao Lin, Rong Shan, Chenxu Zhu, Kounianhua Du, Bo Chen, Shigang Quan, Ruiming Tang, Yong Yu, Weinan Zhang
With large language models (LLMs) achieving remarkable breakthroughs in natural language processing (NLP) domains, LLM-enhanced recommender systems have received much attention and have been actively explored currently.
no code implementations • 19 Aug 2023 • Hengyu Zhang, Chang Meng, Wei Guo, Huifeng Guo, Jieming Zhu, Guangpeng Zhao, Ruiming Tang, Xiu Li
Click-Through Rate (CTR) prediction, crucial in applications like recommender systems and online advertising, involves ranking items based on the likelihood of user clicks.
no code implementations • 15 Aug 2023 • Bowei He, Xu He, Renrui Zhang, Yingxue Zhang, Ruiming Tang, Chen Ma
The high-throughput data requires the model to be updated in a timely manner for capturing the user interest dynamics, which leads to the emergence of streaming recommender systems.
no code implementations • 14 Aug 2023 • Ziru Liu, Kecheng Chen, Fengyi Song, Bo Chen, Xiangyu Zhao, Huifeng Guo, Ruiming Tang
In the domain of streaming recommender systems, conventional methods for addressing new user IDs or item IDs typically involve assigning initial ID embeddings randomly.
1 code implementation • 3 Aug 2023 • Jianghao Lin, Yanru Qu, Wei Guo, Xinyi Dai, Ruiming Tang, Yong Yu, Weinan Zhang
The large capacity of neural models helps digest such massive amounts of data under the supervised learning paradigm, yet they fail to utilize the substantial data to its full potential, since the 1-bit click signal is not sufficient to guide the model to learn capable representations of features and instances.
no code implementations • 26 Jun 2023 • Chuhan Wu, Jingjie Li, Qinglin Jia, Hong Zhu, Yuan Fang, Ruiming Tang
Accurate customer lifetime value (LTV) prediction can help service providers optimize their marketing policies in customer-centric applications.
1 code implementation • 19 Jun 2023 • Yunjia Xi, Weiwen Liu, Jianghao Lin, Xiaoling Cai, Hong Zhu, Jieming Zhu, Bo Chen, Ruiming Tang, Weinan Zhang, Rui Zhang, Yong Yu
In this work, we propose an Open-World Knowledge Augmented Recommendation Framework with Large Language Models, dubbed KAR, to acquire two types of external knowledge from LLMs -- the reasoning knowledge on user preferences and the factual knowledge on items.
2 code implementations • 15 Jun 2023 • Jieming Zhu, Guohao Cai, JunJie Huang, Zhenhua Dong, Ruiming Tang, Weinan Zhang
The error memory module is designed with fast access capabilities and undergoes continual refreshing with newly observed data samples during the model serving phase to support fast model adaptation.
1 code implementation • 9 Jun 2023 • Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Hao Zhang, Yong liu, Chuhan Wu, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
In this paper, we conduct a comprehensive survey on this research direction from the perspective of the whole pipeline in real-world recommender systems.
no code implementations • 7 Jun 2023 • Xianyu Chen, Jian Shen, Wei Xia, Jiarui Jin, Yakun Song, Weinan Zhang, Weiwen Liu, Menghui Zhu, Ruiming Tang, Kai Dong, Dingyin Xia, Yong Yu
Noticing that existing approaches fail to consider the correlations of concepts in the path, we propose a novel framework named Set-to-Sequence Ranking-based Concept-aware Learning Path Recommendation (SRC), which formulates the recommendation task under a set-to-sequence paradigm.
no code implementations • 5 Jun 2023 • Xiangyang Li, Bo Chen, Lu Hou, Ruiming Tang
Both tabular data and converted textual data are regarded as two different modalities and are separately fed into the collaborative CTR model and pre-trained language model.
no code implementations • 2 May 2023 • Yuening Wang, Yingxue Zhang, Antonios Valkanas, Ruiming Tang, Chen Ma, Jianye Hao, Mark Coates
In contrast, for users who have static preferences, model performance can benefit greatly from preserving as much of the user's long-term preferences as possible.
1 code implementation • 21 Mar 2023 • Bowei He, Xu He, Yingxue Zhang, Ruiming Tang, Chen Ma
Personalized recommender systems have been widely studied and deployed to reduce information overload and satisfy users' diverse needs.
2 code implementations • 4 Mar 2023 • Wei Guo, Chang Meng, Enming Yuan, ZhiCheng He, Huifeng Guo, Yingxue Zhang, Bo Chen, Yaochen Hu, Ruiming Tang, Xiu Li, Rui Zhang
However, it is challenging to explore multi-behavior data due to the unbalanced data distribution and sparse target behavior, which lead to the inadequate modeling of high-order relations when treating multi-behavior data "as features" and gradient conflict in multitask learning when treating multi-behavior data "as labels".
no code implementations • 1 Mar 2023 • Xu Chen, Jingsen Zhang, Lei Wang, Quanyu Dai, Zhenhua Dong, Ruiming Tang, Rui Zhang, Li Chen, Ji-Rong Wen
To alleviate the above problems, we propose to build an explainable recommendation dataset with multi-aspect real user labeled ground truths.
no code implementations • 22 Feb 2023 • ZhiCheng He, Weiwen Liu, Wei Guo, Jiarui Qin, Yingxue Zhang, Yaochen Hu, Ruiming Tang
Besides, we elaborate on the industrial practices of UBM methods with the hope of providing insights into the application value of existing UBM solutions.
no code implementations • 7 Feb 2023 • Yuhao Wang, Ha Tsz Lam, Yi Wong, Ziru Liu, Xiangyu Zhao, Yichao Wang, Bo Chen, Huifeng Guo, Ruiming Tang
Multi-task learning (MTL) aims at learning related tasks in a unified model to achieve mutual improvement among tasks considering their shared knowledge.
no code implementations • 12 Dec 2022 • Shiwei Li, Huifeng Guo, Lu Hou, Wei zhang, Xing Tang, Ruiming Tang, Rui Zhang, Ruixuan Li
To this end, we formulate a novel quantization training paradigm to compress the embeddings from the training stage, termed low-precision training (LPT).
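A generic sketch of quantization-aware embedding training with a straight-through estimator, shown only to illustrate compressing embeddings from the training stage; LPT's actual quantizer, bit-width schedule, and optimizer are not reproduced here.

```python
import torch
import torch.nn as nn

class FakeQuantEmbedding(nn.Module):
    """Embedding table whose lookups pass through a uniform fake-quantizer.
    The straight-through estimator keeps gradients flowing to the full-precision weights."""
    def __init__(self, num_embeddings, dim, bits=8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_embeddings, dim) * 0.01)
        self.levels = 2 ** bits - 1

    def forward(self, ids):
        w = self.weight[ids]
        scale = w.abs().max().clamp(min=1e-8) / (self.levels / 2)
        q = torch.round(w / scale) * scale            # uniform symmetric quantization
        return w + (q - w).detach()                   # straight-through estimator

emb = FakeQuantEmbedding(1000, 16, bits=4)
out = emb(torch.tensor([3, 42, 7]))
out.sum().backward()                                   # gradients reach emb.weight
print(out.shape, emb.weight.grad.abs().sum() > 0)
```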
1 code implementation • 17 Nov 2022 • Yunjia Xi, Jianghao Lin, Weiwen Liu, Xinyi Dai, Weinan Zhang, Rui Zhang, Ruiming Tang, Yong Yu
Moreover, simply applying a shared network for all the lists fails to capture the commonalities and distinctions in user behaviors on different lists.
no code implementations • 11 Nov 2022 • Haolun Wu, Yingxue Zhang, Chen Ma, Wei Guo, Ruiming Tang, Xue Liu, Mark Coates
To offer accurate and diverse recommendation services, recent methods use auxiliary information to foster the learning process of user and item representations.
1 code implementation • 26 Oct 2022 • Hengyu Zhang, Enming Yuan, Wei Guo, ZhiCheng He, Jiarui Qin, Huifeng Guo, Bo Chen, Xiu Li, Ruiming Tang
Sequential recommendation (SR) plays an important role in personalized recommender systems because it captures dynamic and diverse preferences from users' real-time increasing behaviors.
2 code implementations • 18 Oct 2022 • Xiangyang Li, Bo Chen, Huifeng Guo, Jingjie Li, Chenxu Zhu, Xiang Long, Sujian Li, Yichao Wang, Wei Guo, Longxia Mao, JinXing Liu, Zhenhua Dong, Ruiming Tang
The FE-Block module performs fine-grained, early feature interaction to capture interactive signals between the user and item towers explicitly, while the CIR module leverages a contrastive interaction regularization to further enhance the interactions implicitly.
no code implementations • 5 Sep 2022 • Zhenhua Dong, Zhe Wang, Jun Xu, Ruiming Tang, JiRong Wen
Soon after the invention of the Internet, the recommender system emerged, and related technologies have since been extensively studied and applied by both academia and industry.
no code implementations • 11 Aug 2022 • Yuxiang Shi, Yue Ding, Bo Chen, YuYang Huang, Yule Wang, Ruiming Tang, Dong Wang
In this paper, we propose a Task aligned Meta-learning based Augmented Graph (TMAG) to address cold-start recommendation.
1 code implementation • 9 Aug 2022 • Fuyuan Lyu, Xing Tang, Hong Zhu, Huifeng Guo, Yingxue Zhang, Ruiming Tang, Xue Liu
To this end, we propose an optimal embedding table learning framework OptEmbed, which provides a practical and general method to find an optimal embedding table for various base CTR models.
Ranked #3 on Click-Through Rate Prediction on KDD12
no code implementations • 3 Aug 2022 • Chang Meng, Ziqi Zhao, Wei Guo, Yingxue Zhang, Haolun Wu, Chen Gao, Dong Li, Xiu Li, Ruiming Tang
More specifically, we propose a novel Coarse-to-fine Knowledge-enhanced Multi-interest Learning (CKML) framework to learn shared and behavior-specific interests for different behaviors.
1 code implementation • 2 Aug 2022 • Haolun Wu, Chen Ma, Yingxue Zhang, Xue Liu, Ruiming Tang, Mark Coates
In order to effectively utilize such information, most research adopts the pairwise ranking method on constructed training triplets (user, positive item, negative item) and aims to distinguish between positive items and negative items for each user.
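The pairwise ranking objective on such (user, positive item, negative item) triplets is commonly the BPR loss; a minimal PyTorch sketch follows as a generic baseline, not as the method proposed in this paper.

```python
import torch
import torch.nn.functional as F

n_users, n_items, dim = 100, 500, 32
user_emb = torch.nn.Embedding(n_users, dim)
item_emb = torch.nn.Embedding(n_items, dim)

def bpr_loss(users, pos_items, neg_items):
    """Encourage each user's score for the positive item to exceed the sampled negative's."""
    u = user_emb(users)
    pos = (u * item_emb(pos_items)).sum(-1)   # dot-product score for positive items
    neg = (u * item_emb(neg_items)).sum(-1)   # dot-product score for sampled negatives
    return -F.logsigmoid(pos - neg).mean()

loss = bpr_loss(torch.tensor([0, 1, 2]), torch.tensor([10, 20, 30]), torch.tensor([11, 21, 31]))
loss.backward()
print(loss.item())
```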
1 code implementation • 17 Jun 2022 • Lingyue Fu, Jianghao Lin, Weiwen Liu, Ruiming Tang, Weinan Zhang, Rui Zhang, Yong Yu
However, with the development of user interface (UI) design, the layout of displayed items on a result page tends to be multi-block (i.e., multi-list) style instead of a single list, which requires different assumptions to model user behaviors more accurately.
1 code implementation • Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval 2021 • Jianghao Lin, Weiwen Liu, Xinyi Dai, Weinan Zhang, Shuai Li, Ruiming Tang, Xiuqiang He, Jianye Hao, Yong Yu
To better exploit search logs and model users' behavior patterns, numerous click models are proposed to extract users' implicit interaction feedback.
no code implementations • 5 Jun 2022 • Yankai Chen, Huifeng Guo, Yingxue Zhang, Chen Ma, Ruiming Tang, Jingjie Li, Irwin King
Learning vectorized embeddings is at the core of various recommender systems for user-item matching.
1 code implementation • 26 Apr 2022 • Qi Wan, Xiangnan He, Xiang Wang, Jiancan Wu, Wei Guo, Ruiming Tang
In this work, we develop a new learning paradigm named Cross Pairwise Ranking (CPR) that achieves unbiased recommendation without knowing the exposure mechanism.
no code implementations • 24 Apr 2022 • Guohao Cai, Jieming Zhu, Quanyu Dai, Zhenhua Dong, Xiuqiang He, Ruiming Tang, Rui Zhang
Deep learning-based recommendation has become a widely adopted technique in various online applications.
1 code implementation • 20 Apr 2022 • Yunjia Xi, Weiwen Liu, Jieming Zhu, Xilong Zhao, Xinyi Dai, Ruiming Tang, Weinan Zhang, Rui Zhang, Yong Yu
MIR combines low-level cross-item interaction and high-level set-to-list interaction, where we view the candidate items to be reranked as a set and the users' behavior history in chronological order as a list.
no code implementations • 4 Apr 2022 • Bo Chen, Xiangyu Zhao, Yejing Wang, Wenqi Fan, Huifeng Guo, Ruiming Tang
Deep recommender systems (DRS) are critical for current commercial online service providers, which address the issue of information overload by recommending items that are tailored to the user's interests and preferences.
no code implementations • 23 Mar 2022 • Yi Li, Jieming Zhu, Weiwen Liu, Liangcai Su, Guohao Cai, Qi Zhang, Ruiming Tang, Xi Xiao, Xiuqiang He
Specifically, PEAR not only captures feature-level and item-level interactions, but also models item contexts from both the initial ranking list and the historical clicked item list.
1 code implementation • 14 Feb 2022 • Weiwen Liu, Yunjia Xi, Jiarui Qin, Fei Sun, Bo Chen, Weinan Zhang, Rui Zhang, Ruiming Tang
As the final stage of the multi-stage recommender system (MRS), re-ranking directly affects user experience and satisfaction by rearranging the input ranking lists, and thereby plays a critical role in MRS. With the advances in deep learning, neural re-ranking has become a trending topic and has been widely applied in industrial applications.
1 code implementation • 27 Jan 2022 • Weijun Hong, Guilin Li, Weinan Zhang, Ruiming Tang, Yunhe Wang, Zhenguo Li, Yong Yu
Neural architecture search (NAS) has shown encouraging results in automating the architecture design.
no code implementations • 3 Dec 2021 • Yankai Chen, Yifei Zhang, Yingxue Zhang, Huifeng Guo, Jingjie Li, Ruiming Tang, Xiuqiang He, Irwin King
In this work, we study the problem of representation learning for recommendation with 1-bit quantization.
no code implementations • 30 Nov 2021 • Wei Guo, Can Zhang, ZhiCheng He, Jiarui Qin, Huifeng Guo, Bo Chen, Ruiming Tang, Xiuqiang He, Rui Zhang
With the help of two novel CNN-based multi-interest extractors, self-supervision signals are discovered with full consideration of different interest representations (point-wise and union-wise), interest dependencies (short-range and long-range), and interest correlations (inter-item and intra-item).
1 code implementation • NeurIPS 2021 • Hang Lai, Jian Shen, Weinan Zhang, Yimin Huang, Xing Zhang, Ruiming Tang, Yong Yu, Zhenguo Li
Model-based reinforcement learning has attracted wide attention due to its superior sample efficiency.
no code implementations • 16 Nov 2021 • Handong Ma, Jiawei Hou, Chenxu Zhu, Weinan Zhang, Ruiming Tang, Jincai Lai, Jieming Zhu, Xiuqiang He, Yong Yu
Pseudo relevance feedback (PRF) automatically performs query expansion based on top-retrieved documents to better represent the user's information need so as to improve the search results.
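As classical background, a non-neural PRF baseline is Rocchio-style expansion, which moves the query vector toward the centroid of the top-retrieved documents; the bag-of-words sketch below illustrates that baseline, not the method proposed here.

```python
import numpy as np

def rocchio_expand(query_vec, top_doc_vecs, alpha=1.0, beta=0.75):
    """Rocchio pseudo relevance feedback: shift the query toward the centroid
    of the top-retrieved (assumed relevant) documents."""
    centroid = np.mean(top_doc_vecs, axis=0)
    return alpha * query_vec + beta * centroid

# Toy term-frequency vectors over a 6-term vocabulary.
query = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 0.0])
top_docs = np.array([[1, 0, 2, 1, 0, 0],
                     [0, 0, 1, 1, 1, 0],
                     [1, 0, 1, 2, 0, 0]], dtype=float)
print(rocchio_expand(query, top_docs))
```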
1 code implementation • 5 Nov 2021 • Chenxu Zhu, Bo Chen, Weinan Zhang, Jincai Lai, Ruiming Tang, Xiuqiang He, Zhenguo Li, Yong Yu
To address these three issues mentioned above, we propose Automatic Interaction Machine (AIM) with three core components, namely, Feature Interaction Search (FIS), Interaction Function Search (IFS) and Embedding Dimension Search (EDS), to select significant feature interactions, appropriate interaction functions and necessary embedding dimensions automatically in a unified framework.
2 code implementations • Proceedings of the 30th ACM International Conference on Information & Knowledge Management 2021 • Bo Chen, Yichao Wang, Zhirong Liu, Ruiming Tang, Wei Guo, Hongkun Zheng, Weiwei Yao, Muyu Zhang, Xiuqiang He
The state-of-the-art deep CTR models with parallel structure (e.g., DCN) learn explicit and implicit feature interactions through independent parallel networks.
no code implementations • 25 Oct 2021 • Yong Gao, Huifeng Guo, Dandan Lin, Yingxue Zhang, Ruiming Tang, Xiuqiang He
It is compatible with existing GNN-based approaches for news recommendation and can capture both collaborative and content filtering information simultaneously.
no code implementations • 18 Oct 2021 • Yunjia Xi, Weiwen Liu, Xinyi Dai, Ruiming Tang, Weinan Zhang, Qing Liu, Xiuqiang He, Yong Yu
As a critical task for large-scale commercial recommender systems, reranking has shown the potential of improving recommendation results by uncovering mutual influence among items.
no code implementations • 28 Sep 2021 • Yunzhe Li, Yue Ding, Bo Chen, Xin Xin, Yule Wang, Yuxiang Shi, Ruiming Tang, Dong Wang
In this paper, we propose a novel time-aware sequential recommendation framework called Social Temporal Excitation Networks (STEN), which introduces temporal point processes to model the fine-grained impact of friends' behaviors on the user's dynamic interests in an event-level direct paradigm.
1 code implementation • 11 Aug 2021 • Jiarui Qin, Weinan Zhang, Rong Su, Zhirong Liu, Weiwen Liu, Ruiming Tang, Xiuqiang He, Yong Yu
Prediction over tabular data is an essential task in many data science applications such as recommender systems, online advertising, medical treatment, etc.
1 code implementation • 3 Aug 2021 • Fuyuan Lyu, Xing Tang, Huifeng Guo, Ruiming Tang, Xiuqiang He, Rui Zhang, Xue Liu
As feature interactions bring in non-linearity, they are widely adopted to improve the performance of CTR prediction models.
Ranked #1 on Click-Through Rate Prediction on Avazu
no code implementations • 25 Jun 2021 • Weiwen Liu, Feng Liu, Ruiming Tang, Ben Liao, Guangyong Chen, Pheng Ann Heng
Fairness in recommendation has attracted increasing attention due to bias and discrimination possibly caused by traditional recommenders.
no code implementations • 9 Jun 2021 • Xiangli Yang, Qing Liu, Rong Su, Ruiming Tang, Zhirong Liu, Xiuqiang He
The field-wise transfer policy decides how the pre-trained embedding representations are frozen or fine-tuned based on the given instance from the target domain.
no code implementations • 1 Jun 2021 • Wei Guo, Rong Su, Renhao Tan, Huifeng Guo, Yingxue Zhang, Zhirong Liu, Ruiming Tang, Xiuqiang He
To solve these problems, we propose a novel module named Dual Graph enhanced Embedding, which is compatible with various CTR prediction models to alleviate these two problems.
no code implementations • 21 Apr 2021 • Weinan Zhang, Jiarui Qin, Wei Guo, Ruiming Tang, Xiuqiang He
In this survey, we provide a comprehensive review of deep learning models for CTR estimation tasks.
1 code implementation • 17 Apr 2021 • Huifeng Guo, Wei Guo, Yong Gao, Ruiming Tang, Xiuqiang He, Wenzhi Liu
Different from the models with dense training data, the training data for CTR models is usually high-dimensional and sparse.
1 code implementation • 13 Apr 2021 • Xinyi Dai, Jianghao Lin, Weinan Zhang, Shuai Li, Weiwen Liu, Ruiming Tang, Xiuqiang He, Jianye Hao, Jun Wang, Yong Yu
Modern information retrieval systems, including web search, ads placement, and recommender systems, typically rely on learning from user feedback.
no code implementations • 13 Jan 2021 • Chen Ma, Liheng Ma, Yingxue Zhang, Ruiming Tang, Xue Liu, Mark Coates
Personalized recommender systems are playing an increasingly important role as more content and services become available and users struggle to identify what might interest them.
1 code implementation • 16 Dec 2020 • Huifeng Guo, Bo Chen, Ruiming Tang, Weinan Zhang, Zhenguo Li, Xiuqiang He
In this paper, we propose a novel embedding learning framework for numerical features in CTR prediction (AutoDis) with high model capacity, end-to-end training and unique representation properties preserved.
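A rough sketch of the underlying idea of giving each numerical feature an end-to-end trainable embedding via soft assignment over learnable meta-embeddings; the bucket-scoring and aggregation details of AutoDis are simplified here and should be read as assumptions.

```python
import torch
import torch.nn as nn

class SoftBinNumericEmbedding(nn.Module):
    """Embed a scalar numerical feature as a softmax-weighted mix of learnable meta-embeddings,
    so the representation is trained end-to-end with the CTR model."""
    def __init__(self, num_buckets=20, dim=16):
        super().__init__()
        self.meta_emb = nn.Parameter(torch.randn(num_buckets, dim) * 0.01)
        self.scorer = nn.Linear(1, num_buckets)      # maps the raw value to bucket logits

    def forward(self, x):                            # x: [batch] raw numerical values
        logits = self.scorer(x.unsqueeze(-1))        # [batch, num_buckets]
        weights = torch.softmax(logits, dim=-1)
        return weights @ self.meta_emb               # [batch, dim]

emb = SoftBinNumericEmbedding()
print(emb(torch.tensor([0.3, 12.5, -1.0])).shape)    # torch.Size([3, 16])
```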
no code implementations • 1 Nov 2020 • Xinyi Dai, Jiawei Hou, Qing Liu, Yunjia Xi, Ruiming Tang, Weinan Zhang, Xiuqiang He, Jun Wang, Yong Yu
To this end, we propose a novel ranking framework called U-rank that directly optimizes the expected utility of the ranking list.
no code implementations • 4 Sep 2020 • Yichao Wang, Huifeng Guo, Ruiming Tang, Zhirong Liu, Xiuqiang He
Deep learning models in recommender systems are usually trained in the batch mode, namely iteratively trained on a fixed-size window of training data.
1 code implementation • 26 Aug 2020 • Kelong Mao, Xi Xiao, Jieming Zhu, Biao Lu, Ruiming Tang, Xiuqiang He
In this work, we propose to formulate item tagging as a link prediction problem between item nodes and tag nodes.
1 code implementation • 25 Aug 2020 • Yishi Xu, Yingxue Zhang, Wei Guo, Huifeng Guo, Ruiming Tang, Mark Coates
We develop a Graph Structure Aware Incremental Learning framework, GraphSAIL, to address the commonly experienced catastrophic forgetting problem that occurs when training a model in an incremental fashion.
1 code implementation • Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining 2020 • Jianing Sun, Wei Guo, Dengcheng Zhang, Yingxue Zhang, Florence Regol, Yaochen Hu, Huifeng Guo, Ruiming Tang, Han Yuan, Xiuqiang He, Mark Coates
Because of the multitude of relationships existing in recommender systems, Graph Neural Networks (GNNs) based approaches have been proposed to better characterize the various relationships between a user and items while modeling a user's preferences.
no code implementations • 18 Jun 2020 • Sijin Zhou, Xinyi Dai, Haokun Chen, Wei-Nan Zhang, Kan Ren, Ruiming Tang, Xiuqiang He, Yong Yu
Interactive recommender system (IRS) has drawn huge attention because of its flexible recommendation strategy and the consideration of optimal long-term user experiences.
no code implementations • 14 Apr 2020 • Yichao Wang, Xiangyu Zhang, Zhirong Liu, Zhenhua Dong, Xinhua Feng, Ruiming Tang, Xiuqiang He
To overcome this limitation, our re-ranking model proposes a personalized DPP to model the trade-off between accuracy and diversity for each individual user.
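Below is a naive greedy MAP sketch for DPP-based diverse re-ranking, where the kernel combines item relevance with pairwise similarity; the single `theta` knob standing in for the per-user accuracy/diversity trade-off is an assumption of this illustration, not the personalized DPP of the paper.

```python
import numpy as np

def greedy_dpp_rerank(relevance, item_embs, k, theta=3.0):
    """Greedily pick k items under a DPP kernel L = diag(q) S diag(q), where q encodes
    relevance and S is cosine similarity; larger theta weights accuracy over diversity."""
    unit = item_embs / np.linalg.norm(item_embs, axis=1, keepdims=True)
    S = unit @ unit.T
    q = np.exp(theta * relevance)
    L = np.outer(q, q) * S + 1e-6 * np.eye(len(relevance))  # small ridge for stability

    selected, candidates = [], list(range(len(relevance)))
    for _ in range(min(k, len(candidates))):
        best, best_score = None, -np.inf
        for c in candidates:
            idx = selected + [c]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            score = logdet if sign > 0 else -np.inf
            if score > best_score:
                best, best_score = c, score
        selected.append(best)
        candidates.remove(best)
    return selected

relevance = np.array([0.9, 0.85, 0.8, 0.3])
item_embs = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0], [0.5, 0.5]])
print(greedy_dpp_rerank(relevance, item_embs, k=2))   # picks a relevant yet diverse pair
```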
4 code implementations • 25 Mar 2020 • Bin Liu, Chenxu Zhu, Guilin Li, Wei-Nan Zhang, Jincai Lai, Ruiming Tang, Xiuqiang He, Zhenguo Li, Yong Yu
By implementing a regularized optimizer over the architecture parameters, the model can automatically identify and remove the redundant feature interactions during the training process of the model.
Ranked #31 on Click-Through Rate Prediction on Criteo
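A loose sketch of the general idea behind the entry above: attach learnable, L1-regularized gates to pairwise feature interactions so that redundant interactions are driven toward zero and can be pruned. The specific regularized optimizer used in the paper is not reproduced, and the gating form here is an assumption.

```python
import itertools
import torch
import torch.nn as nn

class GatedInteractions(nn.Module):
    """Score = sum over field pairs of gate_ij * <e_i, e_j>; an L1 penalty on the gates
    lets training suppress redundant interactions, which can then be pruned."""
    def __init__(self, num_fields, dim):
        super().__init__()
        self.pairs = list(itertools.combinations(range(num_fields), 2))
        self.gates = nn.Parameter(torch.ones(len(self.pairs)))

    def forward(self, field_embs):                     # field_embs: [batch, num_fields, dim]
        inner = torch.stack([(field_embs[:, i] * field_embs[:, j]).sum(-1)
                             for i, j in self.pairs], dim=1)   # [batch, num_pairs]
        return (self.gates * inner).sum(-1)

    def l1_penalty(self):
        return self.gates.abs().sum()

model = GatedInteractions(num_fields=5, dim=8)
x = torch.randn(4, 5, 8)
loss = ((model(x) - torch.ones(4)) ** 2).mean() + 1e-3 * model.l1_penalty()
loss.backward()
print(loss.item())
```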
no code implementations • 1 Jan 2020 • Jianing Sun, Yingxue Zhang, Chen Ma, Mark Coates, Huifeng Guo, Ruiming Tang, Xiuqiang He
In this work, we develop a graph convolution-based recommendation framework, named Multi-Graph Convolution Collaborative Filtering (Multi-GCCF), which explicitly incorporates multiple graphs in the embedding learning process.
6 code implementations • 9 Apr 2019 • Bin Liu, Ruiming Tang, Yingzhi Chen, Jinkai Yu, Huifeng Guo, Yuzhou Zhang
Easy-to-use, modular and extendible package of deep-learning based CTR models: DeepFM, DeepInterestNetwork (DIN), DeepInterestEvolutionNetwork (DIEN), DeepCrossNetwork (DCN), AttentionalFactorizationMachine (AFM), NeuralFactorizationMachine (NFM), AutoInt, Deep Session Interest Network (DSIN).
Ranked #1 on Click-Through Rate Prediction on Huawei App Store
no code implementations • 14 Nov 2018 • Haokun Chen, Xinyi Dai, Han Cai, Wei-Nan Zhang, Xuejian Wang, Ruiming Tang, Yuzhou Zhang, Yong Yu
Reinforcement learning (RL) has recently been introduced to interactive recommender systems (IRS) because of its nature of learning from dynamic interactions and planning for long-run performance.
5 code implementations • 29 Oct 2018 • Feng Liu, Ruiming Tang, Xutao Li, Wei-Nan Zhang, Yunming Ye, Haokun Chen, Huifeng Guo, Yuzhou Zhang
The DRR framework treats recommendation as a sequential decision making procedure and adopts an "Actor-Critic" reinforcement learning scheme to model the interactions between the users and recommender systems, which can consider both the dynamic adaptation and long-term rewards.
8 code implementations • 1 Jul 2018 • Yanru Qu, Bohui Fang, Wei-Nan Zhang, Ruiming Tang, Minzhe Niu, Huifeng Guo, Yong Yu, Xiuqiang He
User response prediction is a crucial component for personalized information retrieval and filtering scenarios, such as recommender system and web search.
8 code implementations • 12 Apr 2018 • Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, Xiuqiang He, Zhenhua Dong
In this paper, we study two instances of DeepFM whose "deep" component is a DNN and a PNN respectively, which we denote as DeepFM-D and DeepFM-P. Comprehensive experiments are conducted to demonstrate the effectiveness of DeepFM-D and DeepFM-P over existing models for CTR prediction, on both benchmark data and commercial data.
22 code implementations • 13 Mar 2017 • Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, Xiuqiang He
Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems.
Ranked #1 on Click-Through Rate Prediction on Company*
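A compact PyTorch sketch of the DeepFM idea described in the entry above: an FM component for second-order feature interactions and a DNN component for higher-order ones, sharing a single embedding table. Layer sizes and field dimensions are placeholder choices, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DeepFMSketch(nn.Module):
    """FM and deep components share the same field embeddings; their outputs are summed
    and passed through a sigmoid to produce the click probability."""
    def __init__(self, field_dims, embed_dim=16, hidden=(64, 32)):
        super().__init__()
        self.embedding = nn.Embedding(sum(field_dims), embed_dim)
        self.linear = nn.Embedding(sum(field_dims), 1)
        offsets = [0]
        for d in field_dims[:-1]:
            offsets.append(offsets[-1] + d)
        self.register_buffer("offsets", torch.tensor(offsets))
        layers, in_dim = [], len(field_dims) * embed_dim
        for h in hidden:
            layers += [nn.Linear(in_dim, h), nn.ReLU()]
            in_dim = h
        layers.append(nn.Linear(in_dim, 1))
        self.mlp = nn.Sequential(*layers)

    def forward(self, x):                        # x: [batch, num_fields] categorical indices
        x = x + self.offsets                     # map per-field indices into one shared table
        emb = self.embedding(x)                  # [batch, num_fields, embed_dim]
        linear_part = self.linear(x).sum(dim=(1, 2))
        # FM second-order term: 0.5 * (square of sum - sum of squares) over embedding dims.
        square_of_sum = emb.sum(1) ** 2
        sum_of_square = (emb ** 2).sum(1)
        fm_part = 0.5 * (square_of_sum - sum_of_square).sum(-1)
        deep_part = self.mlp(emb.flatten(1)).squeeze(-1)
        return torch.sigmoid(linear_part + fm_part + deep_part)

model = DeepFMSketch(field_dims=[100, 50, 10])
print(model(torch.tensor([[5, 3, 1], [99, 49, 9]])))   # two predicted click probabilities
```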