Search Results for author: Zhangchi Zhu

Found 4 papers, 1 paper with code

Understanding the Role of Cross-Entropy Loss in Fairly Evaluating Large Language Model-based Recommendation

no code implementations · 9 Feb 2024 · Cong Xu, Zhangchi Zhu, Jun Wang, Jianyong Wang, Wei Zhang

Large language models (LLMs) have gained much attention in the recommendation community; some studies have observed that LLMs fine-tuned with the cross-entropy loss over a full softmax can already achieve state-of-the-art performance.

Language Modelling · Large Language Model
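For readers unfamiliar with the setup, here is a minimal PyTorch-style sketch of what "cross-entropy loss with a full softmax" means in a recommendation setting: the model scores every item in the catalog and the loss is computed over all of them rather than a sampled subset. Tensor names and shapes are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

# Illustrative only: full-softmax cross-entropy over the whole item catalog,
# as opposed to sampled-softmax variants. All names/shapes are assumptions.
num_items, dim, batch = 10_000, 64, 32
item_emb = torch.randn(num_items, dim)                 # one embedding per catalog item
user_emb = torch.randn(batch, dim)                     # user/context representations
target_items = torch.randint(0, num_items, (batch,))   # ground-truth next items

logits = user_emb @ item_emb.T                         # scores for every item: (batch, num_items)
loss = F.cross_entropy(logits, target_items)           # softmax taken over the full catalog
```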

Contrastive Learning with Negative Sampling Correction

no code implementations · 13 Jan 2024 · Lu Wang, Chao Du, Pu Zhao, Chuan Luo, Zhangchi Zhu, Bo Qiao, Wei Zhang, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, Qi Zhang

To correct the negative sampling bias, we propose a novel contrastive learning method named Positive-Unlabeled Contrastive Learning (PUCL).

Contrastive Learning · Data Augmentation · +2
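As a rough illustration of the negative sampling bias being corrected, the sketch below applies a generic positive-unlabeled (PU) correction inside a contrastive loss: sampled "negatives" are treated as unlabeled, and a class prior discounts the positives hidden among them. This is an assumption-laden sketch, not the paper's PUCL objective; the function name, prior `pi`, and the estimator are illustrative.

```python
import torch
import torch.nn.functional as F

# Hypothetical PU-style correction of the negative term in a contrastive loss.
# NOT the paper's exact method; all names and the estimator are assumptions.
def pu_corrected_contrastive_loss(anchor, positive, unlabeled, pi=0.1, tau=0.1):
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    unlabeled = F.normalize(unlabeled, dim=-1)

    pos_sim = (anchor * positive).sum(-1) / tau   # (batch,)
    unl_sim = anchor @ unlabeled.T / tau          # (batch, num_unlabeled)

    # PU-style estimate of the true-negative term: the unlabeled average minus
    # pi times the positive contribution, clamped to stay positive.
    neg_term = torch.clamp(unl_sim.exp().mean(-1) - pi * pos_sim.exp(), min=1e-8)
    return (-pos_sim + torch.log(pos_sim.exp() + neg_term)).mean()
```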

From Input to Output: A Multi-layer Knowledge Distillation Framework for Compressing Recommendation Models

no code implementations · 8 Nov 2023 · Zhangchi Zhu, Wei Zhang

In this paper, we decompose recommendation models into three layers, i.e., the input layer, the intermediate layer, and the output layer, and address their deficiencies layer by layer.

Knowledge Distillation
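To make the three-layer decomposition concrete, here is a hedged sketch of a distillation loss that matches a student to a teacher at the input-embedding, intermediate-representation, and output levels. The individual loss terms and weights are common knowledge-distillation choices assumed for illustration, not the paper's specific framework.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch: distill at three levels (input embeddings, intermediate
# representations, output scores). `student`/`teacher` are dicts of tensors;
# the keys, losses, and weights are assumptions, not the paper's design.
def multi_layer_kd_loss(student, teacher, w_in=1.0, w_mid=1.0, w_out=1.0, tau=2.0):
    loss_in = F.mse_loss(student["input_emb"], teacher["input_emb"].detach())
    loss_mid = F.mse_loss(student["hidden"], teacher["hidden"].detach())
    loss_out = F.kl_div(
        F.log_softmax(student["logits"] / tau, dim=-1),
        F.softmax(teacher["logits"].detach() / tau, dim=-1),
        reduction="batchmean",
    ) * tau * tau
    return w_in * loss_in + w_mid * loss_mid + w_out * loss_out
```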

Robust Positive-Unlabeled Learning via Noise Negative Sample Self-correction

1 code implementation · 1 Aug 2023 · Zhangchi Zhu, Lu Wang, Pu Zhao, Chao Du, Wei Zhang, Hang Dong, Bo Qiao, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang

To mitigate the impact of label uncertainty and improve the robustness of learning with positive and unlabeled data, we propose a new robust PU learning method with a training strategy motivated by the nature of human learning: easy cases should be learned first.
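As a rough sketch of the "easy cases first" idea, the snippet below keeps only the lowest-loss fraction of unlabeled samples early in training and grows that fraction over epochs. The selection rule is a generic curriculum heuristic assumed for illustration; the paper's self-correction strategy may differ (see its released code for the actual method).

```python
import torch

# Generic "easy cases first" curriculum over unlabeled samples treated as
# negatives: keep the lowest-loss fraction early on, expanding over training.
# Function name and schedule are illustrative assumptions.
def easy_first_mask(per_sample_loss, epoch, total_epochs, start_frac=0.5):
    keep_frac = min(1.0, start_frac + (1 - start_frac) * epoch / total_epochs)
    k = max(1, int(keep_frac * per_sample_loss.numel()))
    threshold = torch.kthvalue(per_sample_loss, k).values
    return per_sample_loss <= threshold   # boolean mask over the batch
```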
