Search Results for author: Zhankui He

Found 19 papers, 15 papers with code

CoRAL: Collaborative Retrieval-Augmented Large Language Models Improve Long-tail Recommendation

no code implementations · 11 Mar 2024 · Junda Wu, Cheng-Chun Chang, Tong Yu, Zhankui He, Jianing Wang, Yupeng Hou, Julian McAuley

Based on the retrieved user-item interactions, the LLM can analyze shared and distinct preferences among users, and summarize the patterns indicating which types of users would be attracted by certain items.

Recommendation Systems · Reinforcement Learning (RL) · +1

Bridging Language and Items for Retrieval and Recommendation

1 code implementation · 6 Mar 2024 · Yupeng Hou, Jiacheng Li, Zhankui He, An Yan, Xiusi Chen, Julian McAuley

This paper introduces BLaIR, a series of pretrained sentence embedding models specialized for recommendation scenarios.

Retrieval · Sentence · +2

Deciphering Compatibility Relationships with Textual Descriptions via Extraction and Explanation

1 code implementation · 17 Dec 2023 · Yu Wang, Zexue He, Zhankui He, Hao Xu, Julian McAuley

This fine-tuning allows the model to generate explanations that convey the compatibility relationships between items.

Linear Recurrent Units for Sequential Recommendation

1 code implementation · 3 Oct 2023 · Zhenrui Yue, Yueqi Wang, Zhankui He, Huimin Zeng, Julian McAuley, Dong Wang

State-of-the-art sequential recommendation relies heavily on self-attention-based recommender models.

Language Modelling · Sequential Recommendation

Automatic Feature Fairness in Recommendation via Adversaries

1 code implementation · 27 Sep 2023 · Hengchang Hu, Yiming Cao, Zhankui He, Samson Tan, Min-Yen Kan

We leverage Adaptive Adversarial perturbation based on the widely applied Factorization Machine (AAFM) as our backbone model.

Fairness · Recommendation Systems

Large Language Models as Zero-Shot Conversational Recommenders

1 code implementation · 19 Aug 2023 · Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley

In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions.

Generative Flow Network for Listwise Recommendation

1 code implementation · 4 Jun 2023 · Shuchang Liu, Qingpeng Cai, Zhankui He, Bowen Sun, Julian McAuley, Dong Zheng, Peng Jiang, Kun Gai

In this work, we aim to learn a policy that can generate sufficiently diverse item lists for users while maintaining high recommendation quality.

Recommendation Systems

Learning Vector-Quantized Item Representation for Transferable Sequential Recommenders

1 code implementation · 22 Oct 2022 · Yupeng Hou, Zhankui He, Julian McAuley, Wayne Xin Zhao

Based on this representation scheme, we further propose an enhanced contrastive pre-training approach, using semi-synthetic and mixed-domain code representations as hard negatives.

Language Modelling · Recommendation Systems · +1
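The vector-quantized item representation described above can be illustrated with a minimal product-quantization sketch: an item's text embedding is split into sub-vectors and each sub-vector is mapped to its nearest centroid in a sub-codebook, yielding discrete codes. The function and codebook names here are illustrative, not the paper's actual API:

```python
import numpy as np

def quantize(x, codebooks):
    """Map an item embedding to discrete codes via product quantization:
    split the vector into equal sub-vectors and, for each, pick the index
    of the nearest centroid in the corresponding sub-codebook."""
    d = x.shape[0] // len(codebooks)
    codes = []
    for m, cb in enumerate(codebooks):
        sub = x[m * d:(m + 1) * d]
        dists = np.linalg.norm(cb - sub, axis=1)  # distance to each centroid
        codes.append(int(np.argmin(dists)))
    return codes
```

The resulting code tuples serve as transferable item identifiers; the contrastive pre-training in the paper then operates on such code representations.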

UCEpic: Unifying Aspect Planning and Lexical Constraints for Generating Explanations in Recommendation

1 code implementation · 28 Sep 2022 · Jiacheng Li, Zhankui He, Jingbo Shang, Julian McAuley

Then, to obtain personalized explanations under this framework of insertion-based generation, we design a method of incorporating aspect planning and personalized references into the insertion process.

Explainable Recommendation · Explanation Generation · +2

Bundle MCR: Towards Conversational Bundle Recommendation

1 code implementation · 26 Jul 2022 · Zhankui He, Handong Zhao, Tong Yu, Sungchul Kim, Fan Du, Julian McAuley

MCR is an emerging recommendation setting that uses a conversational paradigm to elicit user interests, asking about preferences on tags (e.g., categories or attributes) and handling user feedback across multiple rounds in order to acquire feedback and narrow down the output space; however, it has not been explored in the context of bundle recommendation.

Recommendation Systems

Personalized Showcases: Generating Multi-Modal Explanations for Recommendations

no code implementations · 30 Jun 2022 · An Yan, Zhankui He, Jiacheng Li, Tianyang Zhang, Julian McAuley

In this paper, to further enrich explanations, we propose a new task named personalized showcases, in which we provide both textual and visual information to explain our recommendations.

Contrastive Learning

Leashing the Inner Demons: Self-Detoxification for Language Models

no code implementations · 6 Mar 2022 · Canwen Xu, Zexue He, Zhankui He, Julian McAuley

Language models (LMs) can reproduce (or amplify) toxic language seen during training, which poses a risk to their practical application.

Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction

1 code implementation · 1 Sep 2021 · Zhenrui Yue, Zhankui He, Huimin Zeng, Julian McAuley

Under this setting, we propose an API-based model extraction method via limited-budget synthetic data generation and knowledge distillation.

Data Poisoning · Knowledge Distillation · +5
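The knowledge-distillation step in the extraction pipeline above can be sketched with the standard soft-label objective: the student is trained to match the temperature-softened item distribution produced by the black-box teacher on synthetic inputs. This is a generic sketch of that objective, not the paper's exact loss:

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled softmax, stabilized by subtracting the max."""
    z = z / t
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, t=2.0):
    """KL divergence between the softened teacher and student
    distributions over items: the usual knowledge-distillation loss."""
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

Minimizing this loss over many synthetic sequences transfers the teacher's ranking behavior to the student within a limited query budget.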

Adversarial-Based Knowledge Distillation for Multi-Model Ensemble and Noisy Data Refinement

no code implementations · 22 Aug 2019 · Zhiqiang Shen, Zhankui He, Wanyun Cui, Jiahui Yu, Yutong Zheng, Chenchen Zhu, Marios Savvides

In order to distill diverse knowledge from different trained (teacher) models, we propose an adversarial learning strategy: a block-wise training loss guides and optimizes the predefined student network to recover the knowledge in the teacher models, while a discriminator network is simultaneously trained to distinguish teacher features from student features.

Knowledge Distillation · Missing Labels

MEAL: Multi-Model Ensemble via Adversarial Learning

1 code implementation · 6 Dec 2018 · Zhiqiang Shen, Zhankui He, Xiangyang Xue

In this paper, we present a method for compressing large, complex trained ensembles into a single network, where knowledge from a variety of trained deep neural networks (DNNs) is distilled and transferred to a single DNN.

Adversarial Personalized Ranking for Recommendation

1 code implementation · 12 Aug 2018 · Xiangnan He, Zhankui He, Xiaoyu Du, Tat-Seng Chua

Extensive experiments on three public real-world datasets demonstrate the effectiveness of APR: by optimizing MF with APR, it outperforms BPR with a relative improvement of 11.2% on average and achieves state-of-the-art performance for item recommendation.

Recommendation Systems
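The APR objective summarized above can be sketched in a few lines: the BPR loss on an MF (user, positive item, negative item) triple is augmented with the BPR loss under fast-gradient adversarial perturbations of the embeddings. This is a minimal single-triple NumPy illustration under simplifying assumptions, not the authors' implementation:

```python
import numpy as np

def bpr_loss(u, i, j):
    """BPR loss for one (user, pos item, neg item) triple of embeddings."""
    x = u @ i - u @ j
    return -np.log(1.0 / (1.0 + np.exp(-x)))

def apr_loss(u, i, j, eps=0.5, lam=1.0):
    """APR idea: BPR loss plus BPR loss under L2-bounded adversarial
    perturbations of the embeddings, approximated in the gradient
    direction (fast gradient method)."""
    sig = 1.0 / (1.0 + np.exp(-(u @ i - u @ j)))
    # gradients of the BPR loss w.r.t. each embedding
    g_u, g_i, g_j = (sig - 1.0) * (i - j), (sig - 1.0) * u, (1.0 - sig) * u
    # perturbations rescaled to norm eps (guarded against zero gradients)
    d_u = eps * g_u / (np.linalg.norm(g_u) + 1e-12)
    d_i = eps * g_i / (np.linalg.norm(g_i) + 1e-12)
    d_j = eps * g_j / (np.linalg.norm(g_j) + 1e-12)
    return bpr_loss(u, i, j) + lam * bpr_loss(u + d_u, i + d_i, j + d_j)
```

Because the adversarial term is always positive and is largest where the model is least robust, minimizing `apr_loss` regularizes the MF parameters against worst-case perturbations while still fitting the ranking objective.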
