no code implementations • 18 Sep 2024 • Yashar Deldjoo, Zhankui He, Julian McAuley, Anton Korikov, Scott Sanner, Arnau Ramisa, Rene Vidal, Maheswaran Sathiamoorthy, Atoosa Kasirzadeh, Silvia Milano, Francesco Ricci
Generative models are a class of AI models capable of creating new instances of data by learning and sampling from their statistical distributions.
no code implementations • 17 Sep 2024 • Arnau Ramisa, Rene Vidal, Yashar Deldjoo, Zhankui He, Julian McAuley, Anton Korikov, Scott Sanner, Mahesh Sathiamoorthy, Atoosa Kasirzadeh, Silvia Milano, Francesco Ricci
Many recommendation systems limit user inputs to text strings or behavior signals such as clicks and purchases, and system outputs to a list of products sorted by relevance.
no code implementations • 20 Aug 2024 • Anton Korikov, Scott Sanner, Yashar Deldjoo, Zhankui He, Julian McAuley, Arnau Ramisa, Rene Vidal, Mahesh Sathiamoorthy, Atoosa Kasirzadeh, Silvia Milano, Francesco Ricci
While previous chapters focused on recommendation systems (RSs) based on standardized, non-verbal user feedback such as purchases, views, and clicks, the advent of LLMs has unlocked the use of natural language (NL) interactions for recommendation.
1 code implementation • 4 Jun 2024 • Yueqi Wang, Zhankui He, Zhenrui Yue, Julian McAuley, Dong Wang
We start by tracing the AE/AR debate back to its origin through a systematic re-evaluation of SASRec and BERT4Rec, discovering that AR models generally surpass AE models in sequential recommendation.
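To make the AE/AR distinction concrete, here is a minimal sketch of how the two objectives build training targets from one interaction sequence; the item IDs, mask token, and masking ratio are toy assumptions, not the papers' exact preprocessing:

```python
# Sketch: how AR (SASRec-style) and AE (BERT4Rec-style) objectives
# construct targets for one user's item sequence.
import random

MASK_ID = 0                      # hypothetical reserved mask token
seq = [12, 7, 33, 5, 19]         # one user's item sequence

# Autoregressive (AR): predict the next item at every position.
ar_inputs = seq[:-1]             # [12, 7, 33, 5]
ar_targets = seq[1:]             # [7, 33, 5, 19], shifted by one position

# Autoencoding (AE): mask random positions, reconstruct only those.
ae_inputs, ae_targets = [], []
for item in seq:
    if random.random() < 0.2:    # masking ratio is a tunable choice
        ae_inputs.append(MASK_ID)
        ae_targets.append(item)  # loss is computed at masked positions
    else:
        ae_inputs.append(item)
        ae_targets.append(-100)  # ignored by the loss (PyTorch convention)
```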
no code implementations • 20 May 2024 • Zhankui He, Zhouhang Xie, Harald Steck, Dawen Liang, Rahul Jha, Nathan Kallus, Julian McAuley
The RTA framework marries the benefits of both LLMs and traditional recommender systems (RecSys): understanding complex queries as LLMs do, while efficiently controlling the recommended item distributions in conversational recommendation as traditional RecSys do.
1 code implementation • 31 Mar 2024 • Yashar Deldjoo, Zhankui He, Julian McAuley, Anton Korikov, Scott Sanner, Arnau Ramisa, René Vidal, Maheswaran Sathiamoorthy, Atoosa Kasirzadeh, Silvia Milano
Traditional recommender systems (RS) typically use user-item rating histories as their main data source.
1 code implementation • 13 Mar 2024 • Se-eun Yoon, Zhankui He, Jessica Maria Echterhoff, Julian McAuley
Synthetic users are cost-effective proxies for real users in the evaluation of conversational recommender systems.
no code implementations • 11 Mar 2024 • Junda Wu, Cheng-Chun Chang, Tong Yu, Zhankui He, Jianing Wang, Yupeng Hou, Julian McAuley
Based on the retrieved user-item interactions, the LLM can analyze shared and distinct preferences among users, and summarize the patterns that indicate which types of users would be attracted to certain items.
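A minimal sketch of how such retrieved interactions might be assembled into an LLM prompt; the function name, prompt wording, and data are hypothetical, not the paper's pipeline:

```python
# Sketch: building an LLM prompt from retrieved user-item interactions
# so the model can summarize shared vs. distinct preferences.
def build_preference_prompt(target_item, retrieved):
    # retrieved: list of (user_id, [item titles the user interacted with])
    lines = [f"User {u} interacted with: {', '.join(items)}"
             for u, items in retrieved]
    return (
        "Below are interaction histories of users related to the item "
        f"'{target_item}'.\n" + "\n".join(lines) + "\n\n"
        "Summarize the shared and distinct preferences among these users, "
        "and describe which types of users this item would attract."
    )

prompt = build_preference_prompt(
    "wireless earbuds",
    [("u1", ["running shoes", "fitness tracker"]),
     ("u2", ["noise-cancelling headphones", "travel pillow"])],
)
print(prompt)
```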
1 code implementation • 6 Mar 2024 • Yupeng Hou, Jiacheng Li, Zhankui He, An Yan, Xiusi Chen, Julian McAuley
This paper introduces BLaIR, a series of pretrained sentence embedding models specialized for recommendation scenarios.
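As an illustration, item text could be encoded with such a model via Hugging Face transformers; the checkpoint ID below is an assumption, so consult the BLaIR release for the exact name:

```python
# Sketch: encoding item text with a pretrained sentence-embedding model.
# The model_id is an assumed checkpoint name, not guaranteed to be exact.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "hyp1231/blair-roberta-base"   # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

texts = ["wireless noise-cancelling headphones",
         "over-ear bluetooth headset with ANC"]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    out = model(**batch)
# CLS-token embedding as the sentence representation (one common choice)
emb = torch.nn.functional.normalize(out.last_hidden_state[:, 0], dim=-1)
print((emb[0] @ emb[1]).item())  # cosine similarity between the two items
```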
1 code implementation • 23 Feb 2024 • Zeyuan Zhang, Tanmay Laud, Zihang He, Xiaojie Chen, Xinshuang Liu, Zhouhang Xie, Julian McAuley, Zhankui He
We present a new Python toolkit called RecWizard for Conversational Recommender Systems (CRS).
1 code implementation • 17 Dec 2023 • Yu Wang, Zexue He, Zhankui He, Hao Xu, Julian McAuley
This fine-tuning allows the model to generate explanations that convey the compatibility relationships between items.
1 code implementation • 3 Oct 2023 • Zhenrui Yue, Yueqi Wang, Zhankui He, Huimin Zeng, Julian McAuley, Dong Wang
State-of-the-art sequential recommendation relies heavily on self-attention-based recommender models.
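For context, a minimal self-attention sequential recommender in PyTorch; this is a SASRec-style sketch with toy hyperparameters, not any paper's exact architecture:

```python
# Sketch: causal self-attention over an item sequence, scoring all items
# at each position. Causal masking keeps positions from seeing the future.
import torch
import torch.nn as nn

class TinySASRec(nn.Module):
    def __init__(self, n_items, d=64, n_heads=2, max_len=50):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, d, padding_idx=0)
        self.pos_emb = nn.Embedding(max_len, d)
        layer = nn.TransformerEncoderLayer(d, n_heads, dim_feedforward=4 * d,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, seq):                      # seq: (B, L) item IDs
        L = seq.size(1)
        pos = torch.arange(L, device=seq.device)
        h = self.item_emb(seq) + self.pos_emb(pos)
        causal = nn.Transformer.generate_square_subsequent_mask(L)
        h = self.encoder(h, mask=causal)         # (B, L, d)
        return h @ self.item_emb.weight.t()      # scores over all items
```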
1 code implementation • 27 Sep 2023 • Hengchang Hu, Yiming Cao, Zhankui He, Samson Tan, Min-Yen Kan
We leverage Adaptive Adversarial perturbation based on the widely applied Factorization Machine (AAFM) as our backbone model.
1 code implementation • 19 Aug 2023 • Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions.
1 code implementation • 4 Jun 2023 • Shuchang Liu, Qingpeng Cai, Zhankui He, Bowen Sun, Julian McAuley, Dong Zheng, Peng Jiang, Kun Gai
In this work, we aim to learn a policy that can generate sufficiently diverse item lists for users while maintaining high recommendation quality.
1 code implementation • 22 Oct 2022 • Yupeng Hou, Zhankui He, Julian McAuley, Wayne Xin Zhao
Based on this representation scheme, we further propose an enhanced contrastive pre-training approach, using semi-synthetic and mixed-domain code representations as hard negatives.
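A minimal sketch of a contrastive (InfoNCE-style) objective with extra hard negatives appended to the in-batch negatives; the shapes and temperature are illustrative assumptions, not the paper's exact loss:

```python
# Sketch: InfoNCE with in-batch negatives plus K hard negatives per anchor.
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, hard_negatives, tau=0.07):
    # anchor, positive: (B, d); hard_negatives: (B, K, d)
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    hard = F.normalize(hard_negatives, dim=-1)
    batch_logits = anchor @ positive.t()                    # (B, B), diagonal = positives
    hard_logits = torch.einsum("bd,bkd->bk", anchor, hard)  # (B, K) extra hard negatives
    logits = torch.cat([batch_logits, hard_logits], dim=1) / tau
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)
```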
1 code implementation • 28 Sep 2022 • Jiacheng Li, Zhankui He, Jingbo Shang, Julian McAuley
Then, to obtain personalized explanations under this framework of insertion-based generation, we design a method of incorporating aspect planning and personalized references into the insertion process.
1 code implementation • 26 Jul 2022 • Zhankui He, Handong Zhao, Tong Yu, Sungchul Kim, Fan Du, Julian McAuley
MCR is an emerging recommendation setting that uses a conversational paradigm to elicit user interests: it asks about user preferences on tags (e.g., categories or attributes) and handles user feedback across multiple rounds to acquire feedback and narrow down the output space. However, it has not been explored in the context of bundle recommendation.
no code implementations • 30 Jun 2022 • An Yan, Zhankui He, Jiacheng Li, Tianyang Zhang, Julian McAuley
In this paper, to further enrich explanations, we propose a new task named personalized showcases, in which we provide both textual and visual information to explain our recommendations.
no code implementations • 6 Mar 2022 • Canwen Xu, Zexue He, Zhankui He, Julian McAuley
Language models (LMs) can reproduce (or amplify) toxic language seen during training, which poses a risk to their practical application.
1 code implementation • 1 Sep 2021 • Zhenrui Yue, Zhankui He, Huimin Zeng, Julian McAuley
Under this setting, we propose an API-based model extraction method via limited-budget synthetic data generation and knowledge distillation.
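A minimal sketch of the distillation ingredient, assuming the black-box API returns a ranked top-k item list (`teacher_topk`); the pairwise objective below is one common ranking-distillation surrogate, not necessarily the paper's exact loss:

```python
# Sketch: ranking distillation against a black-box recommender. Each
# higher-ranked teacher item should outscore every lower-ranked one
# under the local student model.
import torch
import torch.nn.functional as F

def ranking_distillation_loss(student_scores, teacher_topk):
    # student_scores: (n_items,) logits from the student model.
    # teacher_topk: item IDs returned by the API, best first.
    loss = student_scores.new_zeros(())
    for i in range(len(teacher_topk)):
        for j in range(i + 1, len(teacher_topk)):
            hi, lo = teacher_topk[i], teacher_topk[j]
            loss = loss - F.logsigmoid(student_scores[hi] - student_scores[lo])
    return loss
```

In this setting, the sequences sent to the API would be synthetic and generated under a fixed query budget.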
no code implementations • 22 Aug 2019 • Zhiqiang Shen, Zhankui He, Wanyun Cui, Jiahui Yu, Yutong Zheng, Chenchen Zhu, Marios Savvides
To distill diverse knowledge from different trained (teacher) models, we propose an adversarial learning strategy: a block-wise training loss guides and optimizes the predefined student network to recover the knowledge in the teacher models, while a discriminator network is simultaneously trained to distinguish teacher from student features.
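A minimal sketch of the adversarial ingredient, assuming matched-dimension teacher and student features; the discriminator architecture and losses are illustrative, not the paper's implementation:

```python
# Sketch: a discriminator learns to tell teacher features from student
# features, while the student is trained to fool it.
import torch
import torch.nn as nn
import torch.nn.functional as F

disc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

def adversarial_losses(student_feat, teacher_feat):
    # Discriminator: teacher features -> 1, student features -> 0.
    d_real = disc(teacher_feat)
    d_fake = disc(student_feat.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    # Student: make its features look like the teacher's to the discriminator.
    g_out = disc(student_feat)
    g_loss = F.binary_cross_entropy_with_logits(g_out, torch.ones_like(g_out))
    return d_loss, g_loss
```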
1 code implementation • 6 Dec 2018 • Zhiqiang Shen, Zhankui He, Xiangyang Xue
In this paper, we present a method for compressing large, complex trained ensembles into a single network, where knowledge from a variety of trained deep neural networks (DNNs) is distilled and transferred to a single DNN.
3 code implementations • 19 Sep 2018 • Xiangnan He, Zhankui He, Jingkuan Song, Zhenguang Liu, Yu-Gang Jiang, Tat-Seng Chua
As such, the key to an item-based CF method is in the estimation of item similarities.
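For context, a minimal sketch of classic item-based CF scoring with cosine similarities on toy data; the fixed similarity matrix here stands in for whatever estimator a given method learns:

```python
# Sketch: item-based CF. A user's predicted score for a target item is a
# similarity-weighted sum over the items in their interaction history.
import numpy as np

R = np.array([[1, 0, 1, 1],      # toy user-item interaction matrix
              [0, 1, 1, 0],
              [1, 1, 0, 1]], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(R, axis=0, keepdims=True)
S = (R.T @ R) / (norms.T @ norms + 1e-8)
np.fill_diagonal(S, 0.0)         # an item should not recommend itself

def score(user, item):
    history = np.nonzero(R[user])[0]   # items the user consumed
    return S[item, history].sum()      # similarity-weighted sum

print(score(0, 1))  # predicted affinity of user 0 for item 1
```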
1 code implementation • 12 Aug 2018 • Xiangnan He, Zhankui He, Xiaoyu Du, Tat-Seng Chua
Extensive experiments on three public real-world datasets demonstrate the effectiveness of APR: by optimizing MF with APR, it outperforms BPR with a relative improvement of 11.2% on average and achieves state-of-the-art performance for item recommendation.
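A minimal sketch of the APR idea, assuming PyTorch embeddings that require gradients; epsilon, lambda, and the per-row normalization are illustrative hyperparameter choices rather than the authors' implementation:

```python
# Sketch: adversarial personalized ranking. Perturb MF embeddings in the
# fast-gradient direction that most increases BPR loss, then optimize
# BPR on both the clean and perturbed embeddings.
import torch
import torch.nn.functional as F

def bpr_loss(u, i, j):
    # u: user embeddings; i, j: positive / negative item embeddings
    return -F.logsigmoid((u * i).sum(-1) - (u * j).sum(-1)).mean()

def apr_loss(u, i, j, eps=0.5, lam=1.0):
    # u, i, j must require grad (e.g., nn.Embedding lookups).
    clean = bpr_loss(u, i, j)
    grads = torch.autograd.grad(clean, [u, i, j], retain_graph=True)
    # Fixed perturbations: normalized gradients scaled by eps, detached.
    delta = [(eps * g / (g.norm(dim=-1, keepdim=True) + 1e-8)).detach()
             for g in grads]
    adv = bpr_loss(u + delta[0], i + delta[1], j + delta[2])
    return clean + lam * adv
```

Here the adversarial term acts as a robustness regularizer on top of plain BPR.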