Search Results for author: Zhankui He

Found 10 papers, 5 papers with code

UCEpic: Unifying Aspect Planning and Lexical Constraints for Explainable Recommendation

no code implementations · 28 Sep 2022 · Jiacheng Li, Zhankui He, Jingbo Shang, Julian McAuley

In this paper, we propose UCEpic, an explanation generation model that unifies aspect planning and lexical constraints for controllable personalized generation.

Tasks: Explainable Recommendation, Explanation Generation, +2

Bundle MCR: Towards Conversational Bundle Recommendation

1 code implementation · 26 Jul 2022 · Zhankui He, Handong Zhao, Tong Yu, Sungchul Kim, Fan Du, Julian McAuley

MCR, which uses a conversational paradigm to elicit user interests by asking about preferences on tags (e.g., categories or attributes) and handling user feedback across multiple rounds, is an emerging recommendation setting for acquiring user feedback and narrowing down the output space, but it has not been explored in the context of bundle recommendation.

Tasks: Recommendation Systems

Personalized Showcases: Generating Multi-Modal Explanations for Recommendations

no code implementations · 30 Jun 2022 · An Yan, Zhankui He, Jiacheng Li, Tianyang Zhang, Julian McAuley

In this paper, to further enrich explanations, we propose a new task named personalized showcases, in which we provide both textual and visual information to explain our recommendations.

Tasks: Contrastive Learning

Leashing the Inner Demons: Self-Detoxification for Language Models

no code implementations · 6 Mar 2022 · Canwen Xu, Zexue He, Zhankui He, Julian McAuley

Language models (LMs) can reproduce (or amplify) toxic language seen during training, which poses a risk to their practical application.

Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction

1 code implementation · 1 Sep 2021 · Zhenrui Yue, Zhankui He, Huimin Zeng, Julian McAuley

Under this setting, we propose an API-based model extraction method via limited-budget synthetic data generation and knowledge distillation.

Tasks: Data Poisoning, Knowledge Distillation, +5
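The extraction setting described above combines two ingredients: synthetic queries against a black-box API under a limited budget, and knowledge distillation of the returned outputs into a surrogate. The sketch below illustrates only those two generic ingredients, not the paper's actual method; `black_box_recommender`, the budget, and the sequence format are hypothetical stand-ins.

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Temperature-softened softmax; a higher temperature yields smoother targets.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(teacher_logits, student_logits, temperature=2.0):
    # Standard distillation objective: KL(teacher || student) between
    # temperature-softened distributions.
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(ti * math.log(ti / si) for ti, si in zip(t, s))

def black_box_recommender(sequence, num_items=20):
    # Hypothetical stand-in for the victim API: given an interaction
    # sequence, return per-item scores. The attacker sees only these outputs.
    rng = random.Random(sum(sequence))
    return [rng.random() for _ in range(num_items)]

# Limited-budget extraction loop (sketch): label synthetic interaction
# sequences via the black-box API, collecting (input, soft target) pairs.
budget, num_items = 8, 20
rng = random.Random(0)
dataset = []
for _ in range(budget):
    seq = [rng.randrange(num_items) for _ in range(5)]
    dataset.append((seq, black_box_recommender(seq, num_items)))

# A surrogate model would then be trained to minimize kd_loss against each
# queried target; here we just evaluate the loss for a uniform student.
uniform_student_logits = [0.0] * num_items
loss = kd_loss(dataset[0][1], uniform_student_logits)
```

In the full attack the surrogate is a trainable sequential recommender and the loss is minimized by gradient descent; the point here is only the query-then-distill loop.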

Adversarial-Based Knowledge Distillation for Multi-Model Ensemble and Noisy Data Refinement

no code implementations · 22 Aug 2019 · Zhiqiang Shen, Zhankui He, Wanyun Cui, Jiahui Yu, Yutong Zheng, Chenchen Zhu, Marios Savvides

To distill diverse knowledge from different trained (teacher) models, we propose an adversarial learning strategy: a block-wise training loss guides and optimizes the predefined student network to recover the knowledge in the teacher models, while a discriminator network is simultaneously trained to distinguish teacher features from student features.

Tasks: Knowledge Distillation
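The adversarial game in this distillation setup can be shown in miniature: a discriminator learns to tell teacher features from student features, while the student updates its feature to fool the discriminator. The toy below is a one-dimensional sketch under invented numbers (scalar features, a logistic discriminator), not the paper's block-wise architecture.

```python
import math

def sigmoid(x):
    # Overflow-safe logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

# Toy 1-D adversarial game: the discriminator D(f) = sigmoid(w*f + b) is
# trained to output 1 on the (frozen) teacher feature and 0 on the student
# feature, while the student feature s is trained so that D(s) -> 1,
# i.e., so it becomes indistinguishable from a teacher feature.
t_feat = 2.0        # teacher feature (frozen); illustrative value
s = -1.0            # student feature (trainable), starts far from teacher
w, b = 0.5, 0.0     # discriminator parameters
lr = 0.05

for _ in range(100):
    # Discriminator: gradient ascent on log D(t) + log(1 - D(s)).
    dt, ds = sigmoid(w * t_feat + b), sigmoid(w * s + b)
    w += lr * ((1.0 - dt) * t_feat - ds * s)
    b += lr * ((1.0 - dt) - ds)
    # Student: gradient ascent on log D(s), pulling s toward teacher-like values.
    ds = sigmoid(w * s + b)
    s += lr * (1.0 - ds) * w
```

After the loop the student feature has moved from -1 toward the teacher's value; in the real method the same tug-of-war happens over high-dimensional feature blocks of deep networks.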

MEAL: Multi-Model Ensemble via Adversarial Learning

1 code implementation · 6 Dec 2018 · Zhiqiang Shen, Zhankui He, Xiangyang Xue

In this paper, we present a method for compressing large, complex trained ensembles into a single network, where knowledge from a variety of trained deep neural networks (DNNs) is distilled and transferred to a single DNN.

Adversarial Personalized Ranking for Recommendation

1 code implementation · 12 Aug 2018 · Xiangnan He, Zhankui He, Xiaoyu Du, Tat-Seng Chua

Extensive experiments on three public real-world datasets demonstrate the effectiveness of APR: by optimizing MF with APR, it outperforms BPR with a relative improvement of 11.2% on average and achieves state-of-the-art performance for item recommendation.

Tasks: Recommendation Systems
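The core idea of APR is to augment the pairwise BPR objective with the same loss evaluated under a worst-case perturbation of the model parameters. A minimal sketch of that idea, assuming a plain dot-product MF scorer and a sign-of-gradient (FGSM-style) adversary on the item embeddings; the embeddings, `eps`, and `lam` below are illustrative, not the paper's setup.

```python
import math

def sigmoid(x):
    # Overflow-safe logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def bpr_loss(user, pos_item, neg_item):
    # BPR: -log sigmoid(x_ui - x_uj), pushing the observed item i
    # above a sampled unobserved item j for user u.
    return -math.log(sigmoid(dot(user, pos_item) - dot(user, neg_item)))

def apr_loss(user, pos_item, neg_item, eps=0.1, lam=1.0):
    # APR objective (sketch): BPR loss plus BPR loss under an adversarial,
    # eps-bounded, sign-of-gradient perturbation of the item embeddings.
    margin = dot(user, pos_item) - dot(user, neg_item)
    g = 1.0 - sigmoid(margin)  # common factor in the BPR gradient
    sign = lambda x: (x > 0) - (x < 0)
    # Gradient of the loss wrt pos_item is -g*u and wrt neg_item is +g*u;
    # the adversary steps each embedding by eps * sign(gradient).
    adv_pos = [p - eps * sign(g * u) for p, u in zip(pos_item, user)]
    adv_neg = [p + eps * sign(g * u) for p, u in zip(neg_item, user)]
    base = bpr_loss(user, pos_item, neg_item)
    adv = bpr_loss(user, adv_pos, adv_neg)
    return base + lam * adv

# Illustrative embeddings for one (user, positive item, negative item) triple.
u = [0.3, -0.5, 0.8]
i = [0.4, -0.2, 0.9]
j = [-0.1, 0.3, 0.2]
base = bpr_loss(u, i, j)
total = apr_loss(u, i, j)
```

Because BPR loss is monotone in the score margin, the sign-gradient perturbation always shrinks the margin, so the adversarial term is strictly larger than the clean term; training against both makes the ranking robust to such perturbations.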
