no code implementations • 4 Apr 2024 • Chengkai Huang, Rui Wang, Kaige Xie, Tong Yu, Lina Yao
Despite their great success, the knowledge provided by the retrieval process is not always useful for improving model predictions: for some samples, the LLM is already knowledgeable enough to answer the question correctly without retrieval.
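The idea of skipping retrieval when the model already knows the answer can be sketched as a confidence gate. This is an illustrative sketch, not the paper's method; `generate_with_confidence`, `search`, and the threshold are all hypothetical names and values.

```python
# Hypothetical confidence-gated retrieval: only call the retriever when the
# model's own answer confidence falls below a threshold. All interfaces here
# (generate_with_confidence, search) are assumed for illustration.

def answer_with_adaptive_retrieval(question, model, retriever, threshold=0.7):
    # First, let the model answer from its parametric knowledge alone.
    answer, confidence = model.generate_with_confidence(question)
    if confidence >= threshold:
        # The model is already knowledgeable: skip retrieval entirely.
        return answer
    # Otherwise, augment the prompt with retrieved passages and re-generate.
    passages = retriever.search(question, top_k=3)
    context = "\n".join(passages)
    augmented = f"Context:\n{context}\n\nQuestion: {question}"
    answer, _ = model.generate_with_confidence(augmented)
    return answer
```

The gate saves a retrieval call (and avoids injecting distracting context) whenever the model's self-reported confidence is high enough.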
no code implementations • 17 Feb 2024 • Chengkai Huang, Tong Yu, Kaige Xie, Shuai Zhang, Lina Yao, Julian McAuley
Recently, Foundation Models (FMs), with their extensive knowledge bases and complex architectures, have offered unique opportunities within the realm of recommender systems (RSs).
no code implementations • 13 Aug 2022 • Guanglin Zhou, Chengkai Huang, Xiaocong Chen, Xiwei Xu, Chen Wang, Liming Zhu, Lina Yao
Recognizing that confounders may be elusive, we propose a contrastive self-supervised learning approach that minimizes exposure bias by employing inverse propensity scores and expanding the positive sample set.
no code implementations • 29 Nov 2021 • Zhiqiang Liu, Chengkai Huang, Yanxia Liu
To achieve this goal, a small student model is trained to exploit the knowledge of a large well-trained teacher model.
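The student-teacher setup described here is usually trained with the standard Hinton-style distillation signal, which the following generic sketch illustrates (it is not claimed to be this paper's exact objective): the student is pushed toward the teacher's temperature-softened output distribution.

```python
import math

# Generic knowledge distillation signal: cross-entropy between the teacher's
# and the student's temperature-softened distributions.

def softmax(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # stabilize the exponentials
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return -temperature ** 2 * sum(
        pt * math.log(ps) for pt, ps in zip(p_teacher, p_student)
    )
```

The loss is minimized when the student's softened distribution matches the teacher's, which is how the small model "exploits" the large model's knowledge.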
1 code implementation • 23 Nov 2021 • Zhiqiang Liu, Yanxia Liu, Chengkai Huang
However, to the best of our knowledge, knowledge distillation (KD) and deep mutual learning (DML) have never been jointly explored in a unified framework to solve the knowledge distillation problem.
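One natural reading of such a unified framework (assumed here for illustration, not taken from the paper) is a per-student objective that adds a one-way KD term from a fixed teacher to the DML-style mutual term averaged over peer students.

```python
import math

# Hypothetical unified objective: task loss + KD term (teacher -> student)
# + DML term (average KL from peer students to this student).

def kl_divergence(p, q):
    """KL(p || q) for two discrete probability distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def unified_loss(task_loss, p_teacher, p_peers, p_student,
                 alpha=0.5, beta=0.5):
    kd_term = kl_divergence(p_teacher, p_student)  # offline KD signal
    dml_term = sum(kl_divergence(pp, p_student)
                   for pp in p_peers) / len(p_peers)  # mutual-learning signal
    return task_loss + alpha * kd_term + beta * dml_term
```

When the student already agrees with both the teacher and its peers, both KL terms vanish and only the task loss remains, so the two signals act purely as regularizers toward consensus.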