Such a scheme can be denoted as "text -> code -> representation".
In this paper, we propose UCEpic, an explanation generation model that unifies aspect planning and lexical constraints for controllable personalized generation.
MCR is an emerging recommendation setting that uses a conversational paradigm to elicit user interests: the system asks about user preferences on tags (e.g., categories or attributes) and handles user feedback across multiple rounds, thereby acquiring feedback and narrowing down the output space. However, MCR has not been explored in the context of bundle recommendation.
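The round-by-round narrowing can be illustrated with a toy sketch; `Item`, `mcr_session`, and the `liked_tags` oracle below are all hypothetical constructs for illustration, not components of any cited system:

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    tags: set

def mcr_session(items, liked_tags, max_rounds=3):
    """Toy MCR loop: ask about one tag per round and narrow the candidates.

    `liked_tags` simulates the user's hidden preferences; a real MCR system
    would instead learn a policy for choosing which tag to ask next.
    """
    candidates, asked = list(items), set()
    for _ in range(max_rounds):
        # Ask about the most frequent not-yet-asked tag among the candidates.
        counts = {}
        for item in candidates:
            for t in item.tags - asked:
                counts[t] = counts.get(t, 0) + 1
        if not counts:
            break
        tag = max(counts, key=counts.get)
        asked.add(tag)
        if tag in liked_tags:                    # positive feedback: keep matches
            candidates = [i for i in candidates if tag in i.tags]
        else:                                    # negative feedback: drop matches
            candidates = [i for i in candidates if tag not in i.tags]
    return candidates
```

Each answered question prunes the candidate set, which is how the conversation narrows the output space before a final recommendation is made.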
In this paper, to further enrich explanations, we propose a new task named personalized showcases, in which we provide both textual and visual information to explain our recommendations.
Language models (LMs) can reproduce (or amplify) toxic language seen during training, which poses a risk to their practical application.
Under this setting, we propose an API-based model extraction method that combines limited-budget synthetic data generation with knowledge distillation.
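One plausible reading of such a pipeline is sketched below in PyTorch with hypothetical names: `api_query` wraps the victim model's prediction API, and `x` is a batch drawn from the synthetic data generator (neither is defined by the source):

```python
import torch
import torch.nn.functional as F

def extract_step(x, api_query, student, optimizer):
    """One distillation step against a black-box prediction API (a sketch).

    `api_query(x)` is assumed to return a probability vector per input;
    every call spends query budget, which is why the synthetic training
    set is kept small under a limited budget.
    """
    with torch.no_grad():
        teacher_probs = api_query(x)             # soft labels from the API
    student_log_probs = F.log_softmax(student(x), dim=-1)
    # Match the student's distribution to the API's returned probabilities.
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```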
In order to distill diverse knowledge from different trained (teacher) models, we propose an adversarial learning strategy: a block-wise training loss guides and optimizes the predefined student network to recover the knowledge in the teacher models, while a discriminator network is simultaneously trained to distinguish teacher features from student features.
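A minimal PyTorch sketch of one such training step follows, assuming (hypothetically) that `teacher(x)` and `student(x)` return lists of per-block features and that `discriminators[k]` maps block-k features to a real/fake logit; the `mse_loss` term stands in for the block-wise loss, whose exact form may differ in the original method:

```python
import torch
import torch.nn.functional as F

def adversarial_kd_step(x, teacher, student, discriminators, opt_s, opt_d):
    """One sketch step of adversarial KD with a block-wise loss."""
    with torch.no_grad():
        t_feats = teacher(x)                     # teacher features, held fixed
    s_feats = student(x)

    # 1) Train the discriminators to tell teacher from student features.
    d_loss = 0.0
    for D, t, s in zip(discriminators, t_feats, s_feats):
        real = D(t)
        fake = D(s.detach())                     # detach: no student update here
        d_loss = (d_loss
                  + F.binary_cross_entropy_with_logits(real, torch.ones_like(real))
                  + F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the student to fool the discriminators and match each block.
    s_loss = 0.0
    for D, t, s in zip(discriminators, t_feats, s_feats):
        fooled = D(s)
        s_loss = (s_loss
                  + F.binary_cross_entropy_with_logits(fooled, torch.ones_like(fooled))
                  + F.mse_loss(s, t))            # block-wise feature matching
    opt_s.zero_grad()
    s_loss.backward()
    opt_s.step()
    return d_loss.item(), s_loss.item()
```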
In this paper, we present a method for compressing large, complex trained ensembles into a single network: knowledge from a variety of trained deep neural networks (DNNs) is distilled and transferred to a single DNN.
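A common way to realize this kind of ensemble-to-single-network transfer, sketched below in PyTorch under the assumption of classifier teachers (the function name and the averaging choice are illustrative, not the paper's exact formulation), is to average the teachers' temperature-softened output distributions and train the student against that average:

```python
import torch
import torch.nn.functional as F

def ensemble_distill_step(x, teachers, student, optimizer, temperature=4.0):
    """Distill an ensemble of trained DNNs into a single student (a sketch)."""
    with torch.no_grad():
        # Summarize the ensemble's knowledge as the mean softened distribution.
        teacher_probs = torch.stack(
            [F.softmax(t(x) / temperature, dim=-1) for t in teachers]
        ).mean(dim=0)
    student_log_probs = F.log_softmax(student(x) / temperature, dim=-1)
    loss = F.kl_div(
        student_log_probs, teacher_probs, reduction="batchmean"
    ) * temperature ** 2                         # rescale gradients per Hinton et al.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```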
As such, the key to an item-based CF method lies in the estimation of item similarities.
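For concreteness, here is a small NumPy sketch of one classic similarity estimate, cosine similarity over the user-item interaction matrix; this is one of many possible estimators, and the function names are illustrative:

```python
import numpy as np

def cosine_item_similarity(R):
    """Estimate item-item similarities from a user-item matrix R.

    R has shape (num_users, num_items); R[u, i] is 1 (or a rating) if
    user u interacted with item i. Returns a (num_items, num_items)
    cosine-similarity matrix, the core quantity of item-based CF.
    """
    norms = np.linalg.norm(R, axis=0, keepdims=True)    # per-item column norms
    norms[norms == 0] = 1.0                             # avoid division by zero
    R_normed = R / norms
    return R_normed.T @ R_normed

def score_items(u, R, sim):
    """Score all items for user u as a similarity-weighted sum over the
    items u has already interacted with (seen items should be masked out
    before ranking)."""
    return sim @ R[u]
```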
Extensive experiments on three public real-world datasets demonstrate the effectiveness of APR: by optimizing MF with APR, it outperforms BPR with a relative improvement of 11.2% on average and achieves state-of-the-art performance for item recommendation.