no code implementations • 1 Dec 2023 • Shilin Qu, Weiqing Wang, Yuan-Fang Li, Xin Zhou, Fajie Yuan
HGraphormer injects the hypergraph structure information (local information) into Transformers (global information) by combining the attention matrix and hypergraph Laplacian.
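The combination can be pictured with a minimal sketch, assuming the standard normalized hypergraph Laplacian and a single attention head; the convex blending rule and all names below are illustrative assumptions, not the paper's actual code.

```python
import torch
import torch.nn.functional as F

def hypergraph_laplacian(H):
    """Normalized hypergraph Laplacian L = I - Dv^-1/2 H De^-1 H^T Dv^-1/2,
    built from the node-by-hyperedge incidence matrix H."""
    Dv = H.sum(dim=1).clamp(min=1e-9)      # node degrees
    De = H.sum(dim=0).clamp(min=1e-9)      # hyperedge degrees
    A = (torch.diag(Dv.pow(-0.5)) @ H @ torch.diag(De.pow(-1))
         @ H.t() @ torch.diag(Dv.pow(-0.5)))
    return torch.eye(H.size(0)) - A

def structure_aware_attention(Q, K, V, L, alpha=0.5):
    """Blend global self-attention (Transformer side) with local hypergraph
    structure (Laplacian side); the mixing weight alpha is illustrative."""
    attn = F.softmax(Q @ K.t() / Q.size(-1) ** 0.5, dim=-1)
    local = torch.eye(L.size(0)) - L       # recover the structural propagation term
    return ((1 - alpha) * attn + alpha * local) @ V
```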
no code implementations • 26 Oct 2023 • Weixin Chen, Li Chen, Yongxin Ni, Yuhan Zhao, Fajie Yuan, Yongfeng Zhang
Recently, multimodal recommendations have gained increasing attention for effectively addressing the data sparsity problem by incorporating modality-based representations.
1 code implementation • 27 Sep 2023 • Yongxin Ni, Yu Cheng, Xiangyan Liu, Junchen Fu, Youhua Li, Xiangnan He, Yongfeng Zhang, Fajie Yuan
Micro-videos have recently gained immense popularity, sparking critical research in micro-video recommendation with significant implications for the entertainment, advertising, and e-commerce industries.
1 code implementation • 14 Sep 2023 • JiaQi Zhang, Yu Cheng, Yongxin Ni, Yunzhu Pan, Zheng Yuan, Junchen Fu, Youhua Li, Jie Wang, Fajie Yuan
Learning a recommender system model from an item's raw modality features (such as image, text, audio, etc.) ...
2 code implementations • 13 Sep 2023 • Yu Cheng, Yunzhu Pan, JiaQi Zhang, Yongxin Ni, Aixin Sun, Fajie Yuan
Then, to show the effectiveness of the dataset's image features, we substitute the itemID embeddings (from IDNet) with a powerful vision encoder that represents items using their raw image pixels.
Ranked #1 on Recommendation Systems on PixelRec
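The substitution described above can be sketched as follows; the ResNet backbone, projection size, and class names are illustrative assumptions, not necessarily the paper's setup.

```python
import torch.nn as nn
from torchvision.models import resnet18

class PixelItemEncoder(nn.Module):
    """Represent items by encoding their raw images rather than
    looking up learned itemID embeddings."""
    def __init__(self, dim=64):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # strip the classification head
        self.backbone = backbone
        self.proj = nn.Linear(512, dim)      # map to the recommender's embedding size

    def forward(self, images):               # images: (batch, 3, 224, 224)
        return self.proj(self.backbone(images))

# The IDNet-style lookup this encoder replaces (illustrative sizes):
id_embeddings = nn.Embedding(num_embeddings=10000, embedding_dim=64)
```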
no code implementations • 24 May 2023 • Junchen Fu, Fajie Yuan, Yu Song, Zheng Yuan, Mingyue Cheng, Shenghui Cheng, JiaQi Zhang, Jie Wang, Yunzhu Pan
If yes, we benchmark these existing adapters, which have proven effective on NLP and CV tasks, in item recommendation settings.
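For reference, a minimal bottleneck adapter of the kind benchmarked in NLP and CV; the sizes and placement are illustrative.

```python
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual.
    Only these few parameters are trained; the pretrained backbone stays frozen."""
    def __init__(self, dim=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))  # residual preserves the original path
```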
no code implementations • 19 May 2023 • Ruyu Li, Wenhao Deng, Yu Cheng, Zheng Yuan, JiaQi Zhang, Fajie Yuan
Furthermore, we compare the performance of the TCF paradigm utilizing the most powerful LMs to the currently dominant ID embedding-based paradigm and investigate the transferability of this TCF paradigm.
1 code implementation • 24 Mar 2023 • Zheng Yuan, Fajie Yuan, Yu Song, Youhua Li, Junchen Fu, Fei Yang, Yunzhu Pan, Yongxin Ni
In fact, this question was answered ten years ago, when IDRec beat MoRec by a strong margin in both recommendation accuracy and efficiency.
1 code implementation • 13 Oct 2022 • Guanghu Yuan, Fajie Yuan, Yudong Li, Beibei Kong, Shujie Li, Lei Chen, Min Yang, Chenyun Yu, Bo Hu, Zang Li, Yu Xu, XiaoHu Qie
Existing benchmark datasets for recommender systems (RS) are either created at a small scale or involve very limited forms of user feedback.
1 code implementation • 14 Jun 2022 • Mingyang Hu, Fajie Yuan, Kevin K. Yang, Fusong Ju, Jin Su, Hui Wang, Fei Yang, Qiuyang Ding
Large-scale Protein Language Models (PLMs) have improved performance in protein prediction tasks, ranging from 3D structure prediction to various function predictions.
no code implementations • 13 Jun 2022 • Jie Wang, Fajie Yuan, Mingyue Cheng, Joemon M. Jose, Chenyun Yu, Beibei Kong, Xiangnan He, Zhijin Wang, Bo Hu, Zang Li
That is, the users and the interacted items are represented by their unique IDs, which are generally not shareable across different systems or platforms.
no code implementations • 31 Oct 2021 • Yang Sun, Fajie Yuan, Min Yang, Alexandros Karatzoglou, Shen Li, Xiaoyan Zhao
In this paper, we exploit such redundancy phenomena to improve the performance of RS.
1 code implementation • ACL 2021 • Binzong Geng, Fajie Yuan, Qiancheng Xu, Ying Shen, Ruifeng Xu, Min Yang
This ability to learn consecutive tasks without forgetting how to solve previously learned tasks is essential for developing an online dialogue system.
no code implementations • 15 Jul 2021 • Lei Chen, Fajie Yuan, Jiaxi Yang, Min Yang, Chengming Li
To realize this goal, we propose AdaRec, a knowledge distillation (KD) framework that adaptively compresses the knowledge of a teacher model into a student model according to its recommendation scene, using differentiable Neural Architecture Search (NAS).
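AdaRec's NAS machinery is beyond a snippet, but the distillation objective it builds on is standard: soften teacher and student logits with a temperature and penalize their divergence alongside the task loss. A generic sketch (temperature and weighting are illustrative):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=2.0, alpha=0.5):
    """Standard KD objective: task cross-entropy plus temperature-scaled KL
    divergence from the student's softened predictions to the teacher's."""
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                       # rescale gradients for the temperature
    ce = F.cross_entropy(student_logits, targets)
    return alpha * kd + (1 - alpha) * ce
```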
1 code implementation • 21 Jun 2021 • Binzong Geng, Min Yang, Fajie Yuan, Shupeng Wang, Xiang Ao, Ruifeng Xu
In this paper, we propose iterative network pruning with uncertainty regularization for lifelong sentiment classification (IPRLS), a novel method that leverages the principles of network pruning and weight regularization.
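Leaving the uncertainty-regularization term aside, the pruning half follows the usual iterative magnitude-pruning recipe; a generic sketch with an illustrative sparsity level:

```python
import torch

def magnitude_prune(weight, sparsity=0.2):
    """Zero out the smallest-magnitude weights, freeing capacity for later tasks;
    the returned mask can protect surviving weights when new tasks are learned."""
    k = max(1, int(weight.numel() * sparsity))
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = (weight.abs() > threshold).float()
    return weight * mask, mask
```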
no code implementations • 15 Jun 2021 • Lei Chen, Fajie Yuan, Jiaxi Yang, Xiangnan He, Chengming Li, Min Yang
Fine-tuning works as an effective transfer learning technique for this objective, which adapts the parameters of a pre-trained model from the source domain to the target domain.
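Vanilla fine-tuning as described fits in a few lines: copy the source-domain weights and keep updating all of them on target-domain data. The model, data loader, and loss below are placeholders.

```python
import copy
import torch
import torch.nn.functional as F

def fine_tune(pretrained, target_loader, epochs=3, lr=1e-4):
    """Adapt a source-domain model by continuing gradient descent on
    target-domain batches, updating every parameter."""
    model = copy.deepcopy(pretrained)        # start from the pretrained weights
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in target_loader:           # target-domain interactions
            loss = F.cross_entropy(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```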
1 code implementation • 14 Dec 2020 • Jiachun Wang, Fajie Yuan, Jian Chen, Qingyao Wu, Min Yang, Yang Sun, Guoxiao Zhang
We validate the performance of StackRec by instantiating it with four state-of-the-art SR models in three practical scenarios with real-world datasets.
2 code implementations • 29 Sep 2020 • Fajie Yuan, Guoxiao Zhang, Alexandros Karatzoglou, Joemon Jose, Beibei Kong, Yudong Li
In this paper, we study how to continually learn user representations task by task, whereby new tasks are learned while reusing partial parameters from old ones.
1 code implementation • 28 Apr 2020 • Shilin Qu, Fajie Yuan, Guibing Guo, Liguang Zhang, Wei Wei
Specifically, our framework divides proximal information units into chunks, and performs memory access at certain time steps, whereby the number of memory operations can be greatly reduced.
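The chunking idea in miniature, with a toy memory and an illustrative combine rule (not the paper's actual operators):

```python
class SlotMemory:
    """Toy single-slot external memory (an illustrative stand-in)."""
    def __init__(self):
        self.slot = 0.0

    def read(self):
        return self.slot

    def write(self, value):
        self.slot = value

def run_with_chunks(items, memory, encode=float, chunk_size=8):
    """Buffer chunk_size consecutive information units, then perform a single
    memory read and write per chunk, cutting memory operations ~chunk_size-fold."""
    outputs, buffer = [], []
    for item in items:
        buffer.append(encode(item))
        if len(buffer) == chunk_size:
            state = memory.read()                     # one read per chunk
            outputs += [h + state for h in buffer]    # illustrative combine rule
            memory.write(sum(buffer) / chunk_size)    # one write per chunk
            buffer.clear()
    return outputs
```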
1 code implementation • 21 Apr 2020 • Yang Sun, Fajie Yuan, Min Yang, Guoao Wei, Zhou Zhao, Duo Liu
Current state-of-the-art sequential recommender models are typically based on a sandwich-structured deep neural network, where one or more middle (hidden) layers are placed between the input embedding layer and output softmax layer.
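The sandwich shape in code, with Transformer blocks standing in as an illustrative middle layer:

```python
import torch.nn as nn

class SandwichRecommender(nn.Module):
    """Input embedding layer (bottom), stacked middle blocks (filling),
    and an output softmax layer over items (top)."""
    def __init__(self, num_items=10000, dim=64, num_blocks=4):
        super().__init__()
        self.embed = nn.Embedding(num_items, dim)
        self.middle = nn.Sequential(*[
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
            for _ in range(num_blocks)
        ])
        self.out = nn.Linear(dim, num_items)   # softmax logits over the catalog

    def forward(self, item_ids):               # item_ids: (batch, seq_len)
        return self.out(self.middle(self.embed(item_ids)))
```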
1 code implementation • 13 Jan 2020 • Fajie Yuan, Xiangnan He, Alexandros Karatzoglou, Liguang Zhang
To overcome this issue, we develop a parameter efficient transfer learning architecture, termed as PeterRec, which can be configured on-the-fly to various downstream tasks.
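The parameter-efficiency point in miniature: freeze the pretrained backbone and hand only a small per-task patch to the optimizer, so each new downstream task touches a tiny fraction of the weights. The patch module here is a stand-in, not PeterRec's actual patch design.

```python
def configure_for_task(backbone, patch):
    """Freeze the shared pretrained backbone; only the small per-task patch
    remains trainable when a new downstream task is configured."""
    for p in backbone.parameters():
        p.requires_grad = False
    trainable = sum(p.numel() for p in patch.parameters())
    total = trainable + sum(p.numel() for p in backbone.parameters())
    print(f"training {trainable}/{total} parameters ({100 * trainable / total:.2f}%)")
    return list(patch.parameters())   # hand just these to the optimizer
```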
1 code implementation • 26 Jun 2019 • Xiaoyu Du, Xiangnan He, Fajie Yuan, Jinhui Tang, Zhiguang Qin, Tat-Seng Chua
In this work, we focus on modeling the correlations among embedding dimensions in neural networks to pursue higher effectiveness for CF.
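One concrete way to expose such correlations, used by outer-product-based CF models, is to form the outer product of the user and item embeddings and let a CNN read the resulting interaction map; a sketch with illustrative sizes:

```python
import torch
import torch.nn as nn

class OuterProductCF(nn.Module):
    """The outer product of user/item embeddings yields a (dim x dim) map whose
    entries are pairwise dimension interactions; a CNN models their correlations."""
    def __init__(self, num_users, num_items, dim=64):
        super().__init__()
        self.user = nn.Embedding(num_users, dim)
        self.item = nn.Embedding(num_items, dim)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, u, i):
        e_u, e_i = self.user(u), self.item(i)                  # (batch, dim)
        interaction = torch.einsum("bd,be->bde", e_u, e_i)     # outer product map
        return self.cnn(interaction.unsqueeze(1)).squeeze(-1)  # predicted preference
```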
no code implementations • 11 Jun 2019 • Fajie Yuan, Xiangnan He, Haochuan Jiang, Guibing Guo, Jian Xiong, Zhezhao Xu, Yilin Xiong
To capture sequential dependencies, existing methods resort either to data augmentation techniques or to left-to-right autoregressive training. Since these methods aim to model the sequential nature of user behaviors, they ignore the future data of a target interaction when constructing its prediction model.
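The limitation is easy to see in code: left-to-right training applies a causal mask, so the prediction at position t can never condition on later interactions in the sequence.

```python
import torch

def causal_mask(seq_len):
    """Attention mask for left-to-right training: position t may only attend
    to positions <= t, making all future interactions invisible."""
    return torch.tril(torch.ones(seq_len, seq_len)).bool()

print(causal_mask(4))
# tensor([[ True, False, False, False],
#         [ True,  True, False, False],
#         [ True,  True,  True, False],
#         [ True,  True,  True,  True]])
```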
1 code implementation • 19 Sep 2018 • Jinhui Tang, Xiaoyu Du, Xiangnan He, Fajie Yuan, Qi Tian, Tat-Seng Chua
To this end, we propose a novel solution named Adversarial Multimedia Recommendation (AMR), which can lead to a more robust multimedia recommender model by using adversarial learning.
Information Retrieval • Multimedia
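Adversarial training of this kind typically perturbs the item's image features along the loss gradient during training; a generic FGSM-style sketch in which the model and loss are placeholders (AMR's exact formulation may differ):

```python
import torch

def adversarial_features(image_features, model, loss_fn, epsilon=0.01):
    """Perturb item image features in the maximally harmful direction, bounded
    by epsilon; training against these perturbations hardens the recommender."""
    feats = image_features.clone().detach().requires_grad_(True)
    loss = loss_fn(model(feats))
    grad, = torch.autograd.grad(loss, feats)
    return feats + epsilon * grad.sign()
```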
3 code implementations • 15 Aug 2018 • Fajie Yuan, Alexandros Karatzoglou, Ioannis Arapakis, Joemon M. Jose, Xiangnan He
Convolutional Neural Networks (CNNs) have recently been introduced in the domain of session-based next-item recommendation.
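CNN recommenders in this family stack causal, often dilated, 1D convolutions over the item-embedding sequence; a sketch with illustrative sizes:

```python
import torch.nn as nn
import torch.nn.functional as F

class CausalConvBlock(nn.Module):
    """Causal 1D convolution: left-pad the sequence so position t never sees
    items after t; dilation widens the receptive field without extra layers."""
    def __init__(self, dim=64, kernel=3, dilation=1):
        super().__init__()
        self.pad = (kernel - 1) * dilation
        self.conv = nn.Conv1d(dim, dim, kernel, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x):                  # x: (batch, dim, seq_len)
        out = self.conv(F.pad(x, (self.pad, 0)))
        return self.act(out) + x           # residual connection
```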
no code implementations • ACL 2018 • Xin Xin, Fajie Yuan, Xiangnan He, Joemon M. Jose
Stochastic Gradient Descent (SGD) with negative sampling is the most prevalent approach to learn word representations.
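The objective in one function: raise the score of an observed (word, context) pair and lower it for the sampled negatives; a minimal sketch:

```python
import torch
import torch.nn.functional as F

def sgns_loss(center_vec, context_vec, negative_vecs):
    """Skip-gram negative sampling: log-sigmoid on the true pair's score,
    log-sigmoid on the negated score for each sampled negative."""
    pos = F.logsigmoid(center_vec @ context_vec)
    neg = F.logsigmoid(-(negative_vecs @ center_vec)).sum()
    return -(pos + neg)                    # minimize the negative log-likelihood
```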
no code implementations • 5 Jan 2018 • Guibing Guo, Songlin Zhai, Fajie Yuan, Yu-An Liu, Xingwei Wang
Joint visual-semantic embeddings (VSE) have become a research hotspot for the task of image annotation, which suffers from the semantic gap, i.e., the gap between images' low-level visual features and labels' high-level semantic features.
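A joint embedding in miniature: project visual features and label embeddings into one shared space, so that matched image-label pairs score high there and the gap is narrowed by construction. Shapes and names are illustrative.

```python
import torch.nn as nn
import torch.nn.functional as F

class JointVSE(nn.Module):
    """Map low-level visual features and label embeddings into a shared space;
    a ranking or contrastive loss would pull matching pairs together."""
    def __init__(self, visual_dim=2048, label_vocab=5000, dim=256):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, dim)
        self.label_embed = nn.Embedding(label_vocab, dim)

    def forward(self, visual_feats, label_ids):
        v = F.normalize(self.visual_proj(visual_feats), dim=-1)
        l = F.normalize(self.label_embed(label_ids), dim=-1)
        return (v * l).sum(-1)             # cosine similarity of paired rows
```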
no code implementations • 26 Oct 2017 • Long Chen, Fajie Yuan, Joemon M. Jose, Wei-Nan Zhang
Although the word-popularity based negative sampler has shown superb performance in the skip-gram model, the theoretical motivation behind oversampling popular (non-observed) words as negative samples is still not well understood.
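The sampler in question draws negatives in proportion to word frequency raised to a power (word2vec uses 0.75); a small numpy sketch:

```python
import numpy as np

def popularity_sampler(word_counts, power=0.75):
    """Oversample popular words as negatives: P(w) is proportional to count(w)^power."""
    probs = np.asarray(word_counts, dtype=np.float64) ** power
    probs /= probs.sum()
    return lambda k: np.random.choice(len(probs), size=k, p=probs)

sample = popularity_sampler([100, 10, 1])  # toy vocabulary of three words
print(sample(5))                           # the popular word 0 dominates the draws
```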