1 code implementation • 18 Feb 2025 • Wenlong Meng, Zhenyuan Guo, Lenan Wu, Chen Gong, Wenyan Liu, Weixian Li, Chengkun Wei, Wenzhi Chen
In the second stage, we design a new criterion to score and rank the PII candidates.
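The snippet does not spell out the criterion itself; a minimal sketch of the general idea, scoring each PII candidate with some criterion and sorting by that score, might look like the following (the `score_candidate` function is a hypothetical placeholder, not the paper's criterion).

```python
# Minimal sketch: score each PII candidate and rank them.
# `score_candidate` is a hypothetical placeholder; the paper defines its own criterion.
from typing import Callable, List, Tuple

def rank_pii_candidates(
    candidates: List[str],
    score_candidate: Callable[[str], float],
) -> List[Tuple[str, float]]:
    """Score every PII candidate and return them sorted from most to least likely."""
    scored = [(c, score_candidate(c)) for c in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy usage: string length stands in for a real scoring criterion.
if __name__ == "__main__":
    toy_candidates = ["alice@example.com", "555-0100", "John Doe"]
    print(rank_pii_candidates(toy_candidates, score_candidate=len))
```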
no code implementations • 13 Dec 2024 • Kai Yao, Zhaorui Tan, Tiandi Ye, Lichun Li, Yuan Zhao, Wenyan Liu, Wei Wang, Jianke Zhu
Offsite-tuning is a privacy-preserving method for tuning large language models (LLMs): the LLM owner shares a lossy, compressed emulator with data owners, who use it for downstream task tuning.
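As a rough illustration of this workflow (not the paper's algorithm), one common way to build a lossy emulator is to keep only a subset of transformer blocks and let the data owner train small adapter modules around the frozen emulator; the layer-dropping rule and adapter design below are assumptions made for the sketch.

```python
# Sketch of the offsite-tuning workflow under assumed choices:
# the emulator keeps every other transformer block (lossy compression),
# and the data owner tunes small adapters while the emulator stays frozen.
import copy
import torch.nn as nn

def build_emulator(llm_blocks: nn.ModuleList, keep_every: int = 2) -> nn.ModuleList:
    """LLM owner: drop layers to produce a lossy, compressed emulator."""
    kept = [copy.deepcopy(b) for i, b in enumerate(llm_blocks) if i % keep_every == 0]
    return nn.ModuleList(kept)

class AdapterTunedEmulator(nn.Module):
    """Data owner: frozen emulator sandwiched between trainable adapters."""
    def __init__(self, emulator: nn.ModuleList, hidden: int):
        super().__init__()
        self.in_adapter = nn.Linear(hidden, hidden)   # trainable
        self.emulator = emulator                      # frozen
        for p in self.emulator.parameters():
            p.requires_grad = False
        self.out_adapter = nn.Linear(hidden, hidden)  # trainable

    def forward(self, x):
        x = self.in_adapter(x)
        for block in self.emulator:
            x = block(x)
        return self.out_adapter(x)
```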
no code implementations • 19 Jun 2024 • Kangtong Mo, Wenyan Liu, Xuanzhen Xu, Chang Yu, Yuelin Zou, Fangqing Xia
In this study, we explore the application of sentiment analysis on financial news headlines to understand investor sentiment.
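The snippet does not name the model used; a minimal sketch with an off-the-shelf Hugging Face sentiment pipeline (the FinBERT checkpoint below is one publicly available choice, not necessarily the paper's setup) could look like this.

```python
# Minimal sketch: sentiment analysis on financial news headlines.
# The checkpoint is one publicly available financial-sentiment model,
# not necessarily the one used in the paper.
from transformers import pipeline

headlines = [
    "Shares surge after the company beats quarterly earnings estimates",
    "Regulator opens probe into the bank's lending practices",
]

classifier = pipeline("sentiment-analysis", model="ProsusAI/finbert")
for headline, result in zip(headlines, classifier(headlines)):
    print(f"{result['label']:>9}  ({result['score']:.2f})  {headline}")
```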
no code implementations • 14 Aug 2022 • Wenyan Liu, Juncheng Wan, Xiaoling Wang, Weinan Zhang, Dell Zhang, Hang Li
In this paper, we investigate fast machine unlearning techniques for recommender systems that can remove the effect of a small amount of training data from the recommendation model without incurring the full cost of retraining.
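As a generic illustration of the setting (not the authors' algorithm), the sketch below deletes a handful of user-item interactions from a matrix-factorization recommender and re-optimizes only the embeddings of the affected users and items, rather than retraining from scratch; the model and hyperparameter choices are assumptions.

```python
# Generic sketch of approximate unlearning for a matrix-factorization recommender:
# delete the target interactions, then fine-tune only the embeddings of the
# affected users and items instead of retraining the whole model from scratch.
# This is an illustrative approach, not the paper's algorithm.
import torch
import torch.nn.functional as F

def unlearn_interactions(user_emb, item_emb, remaining, affected_users, affected_items,
                         steps: int = 50, lr: float = 0.05):
    """user_emb, item_emb: torch.nn.Embedding; remaining: (user, item, rating) triples left after deletion."""
    opt = torch.optim.SGD([user_emb.weight, item_emb.weight], lr=lr)
    users = torch.tensor([u for u, _, _ in remaining])
    items = torch.tensor([i for _, i, _ in remaining])
    ratings = torch.tensor([r for _, _, r in remaining], dtype=torch.float32)
    # Masks that zero out gradients for every embedding not touched by the deletion.
    mask_u = torch.zeros(user_emb.num_embeddings)
    mask_u[list(affected_users)] = 1.0
    mask_i = torch.zeros(item_emb.num_embeddings)
    mask_i[list(affected_items)] = 1.0
    for _ in range(steps):
        opt.zero_grad()
        pred = (user_emb(users) * item_emb(items)).sum(dim=1)  # dot-product scores
        loss = F.mse_loss(pred, ratings)
        loss.backward()
        user_emb.weight.grad *= mask_u.unsqueeze(1)
        item_emb.weight.grad *= mask_i.unsqueeze(1)
        opt.step()
    return user_emb, item_emb
```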
1 code implementation • 9 Feb 2022 • Moyi Yang, Junjie Sheng, Xiangfeng Wang, Wenyan Liu, Bo Jin, Jun Wang, Hongyuan Zha
Fairness is regarded as a critical metric for machine learning models and an important component of trustworthy machine learning.
no code implementations • 1 Jan 2021 • Wenyan Liu, Xiangfeng Wang, Xingjian Lu, Junhong Cheng, Bo Jin, Xiaoling Wang, Hongyuan Zha
This paper proposes a fair differential privacy algorithm (FairDP) to mitigate the disparate impact on model accuracy for each class.
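The snippet states only the goal; as a rough, assumed illustration (not the FairDP algorithm itself), the disparate impact of a differentially private model is often quantified as the gap between its best and worst per-class accuracy, e.g.:

```python
# Sketch: quantify disparate impact of a (DP-trained) classifier as the gap
# between its best and worst per-class accuracy. This metric definition is an
# illustrative assumption, not the paper's FairDP algorithm.
import numpy as np

def per_class_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    return {c: float(np.mean(y_pred[y_true == c] == c)) for c in np.unique(y_true)}

def accuracy_disparity(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    acc = per_class_accuracy(y_true, y_pred)
    return max(acc.values()) - min(acc.values())

# Toy example: class 1 suffers a larger accuracy drop, giving a disparity of 0.5.
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_pred = np.array([0, 0, 0, 0, 1, 1, 0, 0])
print(per_class_accuracy(y_true, y_pred))   # {0: 1.0, 1: 0.5}
print(accuracy_disparity(y_true, y_pred))   # 0.5
```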