no code implementations • 23 Apr 2024 • Xiongxiao Xu, Yueqing Liang, Baixiang Huang, Zhiling Lan, Kai Shu
In this paper, we propose Mambaformer, a hybrid framework that internally combines Mamba for long-range dependencies and Transformer for short-range dependencies, for long-short range forecasting.
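A minimal sketch of how such a hybrid layer might be wired, assuming a Mamba block from the `mamba_ssm` package and standard PyTorch self-attention; the layer ordering, names, and hyperparameters below are illustrative, not the paper's exact Mambaformer architecture.

```python
# Illustrative sketch only: interleaves a Mamba block (long-range) with
# self-attention (short-range); not the exact Mambaformer architecture.
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # assumed dependency: pip install mamba-ssm


class HybridLayer(nn.Module):
    def __init__(self, d_model: int = 128, n_heads: int = 4):
        super().__init__()
        self.mamba = Mamba(d_model=d_model)  # state space model for long-range mixing
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sequence_length, d_model)
        x = x + self.mamba(self.norm1(x))            # long-range dependencies
        h = self.norm2(x)
        attn_out, _ = self.attn(h, h, h)             # short-range dependencies
        return x + attn_out
```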
no code implementations • 14 Feb 2024 • Chen Wang, Fangxin Wang, Ruocheng Guo, Yueqing Liang, Kay Liu, Philip S. Yu
Recognizing the critical role of confidence in aligning training objectives with evaluation metrics, we propose CPFT, a versatile framework that enhances recommendation confidence by integrating Conformal Prediction (CP)-based losses with the cross-entropy (CE) loss during fine-tuning.
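A hedged sketch of the general idea of mixing a CP-inspired penalty with cross-entropy during fine-tuning; the nonconformity score, quantile calibration, and weighting below are illustrative assumptions, not CPFT's actual loss functions.

```python
# Illustrative only: combines cross-entropy with a simple CP-inspired penalty.
# The nonconformity score and quantile calibration here are assumptions,
# not the CPFT losses from the paper.
import torch
import torch.nn.functional as F


def cp_inspired_loss(logits, targets, cal_logits, cal_targets, alpha=0.1, lam=0.5):
    # Standard next-item / classification objective.
    ce = F.cross_entropy(logits, targets)

    # Nonconformity score on a held-out calibration batch: 1 - p(true item).
    with torch.no_grad():
        cal_probs = F.softmax(cal_logits, dim=-1)
        cal_scores = 1.0 - cal_probs.gather(1, cal_targets.unsqueeze(1)).squeeze(1)
        q_hat = torch.quantile(cal_scores, 1.0 - alpha)  # conformal quantile

    # Penalize training examples whose nonconformity exceeds the calibrated
    # threshold, nudging the model toward more confident predictions.
    probs = F.softmax(logits, dim=-1)
    scores = 1.0 - probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    cp_penalty = F.relu(scores - q_hat).mean()

    return ce + lam * cp_penalty
```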
no code implementations • 15 Nov 2023 • Yueqing Liang, Lu Cheng, Ali Payani, Kai Shu
This work investigates the potential to undermine both fairness and detection performance in abusive language detection.
no code implementations • 13 Oct 2023 • Chen Wang, Liangwei Yang, Zhiwei Liu, Xiaolong Liu, Mingdai Yang, Yueqing Liang, Philip S. Yu
However, PLMs often overlook vital collaborative filtering signals, making it challenging to merge the collaborative and semantic representation spaces and to fine-tune semantic representations for better alignment with warm-start conditions.
no code implementations • 18 Jul 2022 • Canyu Chen, Yueqing Liang, Xiongxiao Xu, Shangyu Xie, Ashish Kundu, Ali Payani, Yuan Hong, Kai Shu
Thus, it is essential to ensure fairness in machine learning models.
no code implementations • 8 Jun 2022 • Yueqing Liang, Canyu Chen, Tian Tian, Kai Shu
Though we lack sensitive attributes for training a fair model in the target domain, there might exist a similar domain that does have them.
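One simple way such a related domain could be exploited is sketched below: estimate the sensitive attribute of target-domain samples with a classifier trained on the related domain, then use the estimates in a demographic-parity regularizer. This is an illustrative assumption about how the setting can be used, not necessarily the method the paper proposes.

```python
# Illustrative assumption: sensitive attributes estimated by a classifier
# trained on a related source domain feed a fairness regularizer on the
# target domain. Not necessarily the paper's actual method.
import torch
import torch.nn.functional as F


def demographic_parity_gap(preds: torch.Tensor, est_sensitive: torch.Tensor) -> torch.Tensor:
    # preds: predicted positive probabilities on target data, shape (N,)
    # est_sensitive: estimated 0/1 group labels from the source-trained classifier
    # (assumes both groups appear in the batch)
    group0 = preds[est_sensitive == 0].mean()
    group1 = preds[est_sensitive == 1].mean()
    return (group0 - group1).abs()


def fair_target_loss(logits, labels, est_sensitive, lam=1.0):
    task_loss = F.binary_cross_entropy_with_logits(logits, labels.float())
    fairness_penalty = demographic_parity_gap(torch.sigmoid(logits), est_sensitive)
    return task_loss + lam * fairness_penalty
```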
no code implementations • 16 Nov 2021 • Chen Wang, Yueqing Liang, Zhiwei Liu, Tao Zhang, Philip S. Yu
Then, we transfer the pre-trained graph encoder to initialize the node embeddings in the target domain, which benefits fine-tuning of the single-domain recommender system on that domain.
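A minimal sketch of this transfer step, assuming a PyTorch encoder whose weights were saved after source-domain pre-training; the class names, embedding-based encoder stand-in, and file path are placeholders, not the paper's actual code.

```python
# Illustrative sketch: initialize a target-domain recommender's graph encoder
# from weights pre-trained on the source domain, then fine-tune end to end.
import torch
import torch.nn as nn


class NodeEncoder(nn.Module):
    # Stand-in for the pre-trained graph encoder; a real implementation would
    # be a GNN operating on the user-item interaction graph.
    def __init__(self, num_nodes: int, d_model: int = 64):
        super().__init__()
        self.embed = nn.Embedding(num_nodes, d_model)

    def forward(self, node_idx):
        return self.embed(node_idx)


class TargetRecommender(nn.Module):
    def __init__(self, encoder: NodeEncoder, d_model: int = 64):
        super().__init__()
        self.encoder = encoder
        self.scorer = nn.Linear(2 * d_model, 1)  # scores (user, item) pairs

    def forward(self, user_idx, item_idx):
        z_u, z_i = self.encoder(user_idx), self.encoder(item_idx)
        return self.scorer(torch.cat([z_u, z_i], dim=-1)).squeeze(-1)


# Transfer step: load encoder weights pre-trained on the source domain,
# then fine-tune the whole target-domain recommender.
encoder = NodeEncoder(num_nodes=10_000)
encoder.load_state_dict(torch.load("source_encoder.pt"))  # placeholder path
model = TargetRecommender(encoder)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```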