Search Results for author: Chengkai Huang

Found 5 papers, 1 paper with code

Learn When (not) to Trust Language Models: A Privacy-Centric Adaptive Model-Aware Approach

no code implementations • 4 Apr 2024 • Chengkai Huang, Rui Wang, Kaige Xie, Tong Yu, Lina Yao

Despite the great success of retrieval-augmented LLMs, the knowledge provided by the retrieval process is not always useful for improving model predictions: for some samples, the LLM is already knowledgeable enough to answer the question correctly without retrieval.

Continual Learning • Retrieval
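The core idea excerpted above, deciding per query whether retrieval is worth invoking, can be illustrated with a minimal confidence-gated sketch. This is not the paper's method; `model.score`, `model.generate`, and `retriever.search` are hypothetical interfaces, and the threshold is an arbitrary choice.

```python
import torch
import torch.nn.functional as F

def lm_confidence(logits: torch.Tensor) -> float:
    """Maximum softmax probability over the vocabulary, used here as a
    crude proxy for how confident the LM already is."""
    return F.softmax(logits, dim=-1).max().item()

def answer(question, model, retriever, threshold=0.8):
    """Confidence-gated retrieval: only retrieve when the LM looks unsure."""
    logits = model.score(question)            # next-token logits (hypothetical API)
    if lm_confidence(logits) >= threshold:
        return model.generate(question)       # model already knows: skip retrieval
    docs = retriever.search(question)         # otherwise fall back to retrieval
    return model.generate(question, context=docs)
```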

Foundation Models for Recommender Systems: A Survey and New Perspectives

no code implementations • 17 Feb 2024 • Chengkai Huang, Tong Yu, Kaige Xie, Shuai Zhang, Lina Yao, Julian McAuley

Recently, Foundation Models (FMs), with their extensive knowledge bases and complex architectures, have offered unique opportunities within the realm of recommender systems (RSs).

Recommendation Systems • Representation Learning

Contrastive Counterfactual Learning for Causality-aware Interpretable Recommender Systems

no code implementations • 13 Aug 2022 • Guanglin Zhou, Chengkai Huang, Xiaocong Chen, Xiwei Xu, Chen Wang, Liming Zhu, Lina Yao

Recognizing that confounders may be elusive, we propose a contrastive self-supervised learning method to minimize exposure bias, employing inverse propensity scores and expanding the positive sample set.

Causal Inference • counterfactual +2
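The abstract mentions inverse propensity scores inside a contrastive objective. A rough sketch of what an IPS-weighted InfoNCE-style loss could look like follows; the weighting scheme and the clamping constant are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def ips_infonce(anchor, positives, negatives, propensities, tau=0.1):
    """InfoNCE-style contrastive loss where each positive pair is
    re-weighted by its inverse propensity score, down-weighting
    over-exposed items (a sketch of the exposure-bias correction)."""
    anchor = F.normalize(anchor, dim=-1)          # (d,)
    positives = F.normalize(positives, dim=-1)    # (P, d)
    negatives = F.normalize(negatives, dim=-1)    # (N, d)

    pos_sim = positives @ anchor / tau            # (P,) similarities to positives
    neg_sim = negatives @ anchor / tau            # (N,) similarities to negatives
    denom = torch.logsumexp(torch.cat([pos_sim, neg_sim]), dim=0)

    weights = 1.0 / propensities.clamp(min=1e-3)  # inverse propensity scores
    return -(weights * (pos_sim - denom)).sum() / weights.sum()
```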

Improved Knowledge Distillation via Adversarial Collaboration

no code implementations • 29 Nov 2021 • Zhiqiang Liu, Chengkai Huang, Yanxia Liu

In knowledge distillation, a small student model is trained to exploit the knowledge of a large, well-trained teacher model.

Knowledge Distillation
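The excerpt describes the standard teacher-student setup. For reference, the classic distillation objective (Hinton et al.'s soft-target loss, not this paper's adversarial variant) can be written as below; the temperature and mixing weight are the usual hyperparameters, not values from the paper.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Standard knowledge-distillation objective: cross-entropy on the
    hard labels plus a KL term pulling the student's softened
    predictions toward the teacher's."""
    ce = F.cross_entropy(student_logits, labels)
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    # T^2 rescales gradients so the soft term stays comparable to CE.
    kl = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * T * T
    return alpha * ce + (1 - alpha) * kl
```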

Semi-Online Knowledge Distillation

1 code implementation • 23 Nov 2021 • Zhiqiang Liu, Yanxia Liu, Chengkai Huang

However, to the best of our knowledge, knowledge distillation (KD) and deep mutual learning (DML) have never been jointly explored in a unified framework to solve the knowledge distillation problem.

Knowledge Distillation • Model Compression +1
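Since the abstract's point is the joint use of KD and DML, here is a hedged sketch of what such a combined objective might look like: each peer student matches the hard labels, a fixed teacher's soft targets (offline KD), and the other peer's current predictions (online mutual learning). The equal loss weighting is an assumption, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def mutual_kd_losses(logits_a, logits_b, teacher_logits, labels, T=4.0):
    """Joint KD + DML sketch: two peer students, one frozen teacher."""
    def kl(student, target):
        # Targets are detached so gradients flow only to the student term.
        return F.kl_div(F.log_softmax(student / T, dim=-1),
                        F.softmax(target.detach() / T, dim=-1),
                        reduction="batchmean") * T * T

    loss_a = (F.cross_entropy(logits_a, labels)
              + kl(logits_a, teacher_logits)   # offline KD term
              + kl(logits_a, logits_b))        # online mutual-learning term
    loss_b = (F.cross_entropy(logits_b, labels)
              + kl(logits_b, teacher_logits)
              + kl(logits_b, logits_a))
    return loss_a, loss_b
```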
