Search Results for author: Wenyan Liu

Found 6 papers, 2 papers with code

ScaleOT: Privacy-utility-scalable Offsite-tuning with Dynamic LayerReplace and Selective Rank Compression

no code implementations • 13 Dec 2024 Kai Yao, Zhaorui Tan, Tiandi Ye, Lichun Li, Yuan Zhao, Wenyan Liu, Wei Wang, Jianke Zhu

Offsite-tuning is a privacy-preserving method for tuning large language models (LLMs) in which the LLM owner shares a lossy compressed emulator with data owners for downstream task tuning.

Knowledge Distillation · Privacy Preserving
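The snippet above describes the general offsite-tuning setup: the model owner sends a compressed emulator, and the data owner tunes only a small exposed part of it. Below is a minimal sketch of that generic setup on a toy model, assuming a crude "keep every k-th layer" compression and first/last-layer adapters; it is not ScaleOT's Dynamic LayerReplace or rank-compression procedure, and all names are illustrative.

```python
import copy
import torch
import torch.nn as nn

# Toy stand-in for an LLM: a stack of transformer-like blocks.
class ToyLM(nn.Module):
    def __init__(self, d=64, n_layers=12):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(d, d) for _ in range(n_layers))

    def forward(self, x):
        for layer in self.layers:
            x = torch.relu(layer(x))
        return x

def make_emulator(model, keep_every=3):
    """LLM owner: build a lossy compressed emulator by keeping only every
    keep_every-th block (a crude stand-in for layer replacement/compression)."""
    emulator = copy.deepcopy(model)
    emulator.layers = nn.ModuleList(
        emulator.layers[i] for i in range(0, len(emulator.layers), keep_every))
    return emulator

# Data owner: tune only the first and last blocks ("adapters") on the emulator,
# keeping the compressed middle frozen; the tuned blocks go back to the owner.
full_model = ToyLM()
emulator = make_emulator(full_model)
for p in emulator.parameters():
    p.requires_grad = False
for p in list(emulator.layers[0].parameters()) + list(emulator.layers[-1].parameters()):
    p.requires_grad = True

opt = torch.optim.Adam([p for p in emulator.parameters() if p.requires_grad], lr=1e-3)
x, y = torch.randn(8, 64), torch.randn(8, 64)
loss = nn.functional.mse_loss(emulator(x), y)
loss.backward()
opt.step()
```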

Fine-Tuning Gemma-7B for Enhanced Sentiment Analysis of Financial News Headlines

no code implementations • 19 Jun 2024 Kangtong Mo, Wenyan Liu, Xuanzhen Xu, Chang Yu, Yuelin Zou, Fangqing Xia

In this study, we explore the application of sentiment analysis to financial news headlines to understand investor sentiment.

Management · Sentiment Analysis +1
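The paper fine-tunes Gemma-7B for headline sentiment; the snippet gives no training recipe, so the sketch below only illustrates the standard Hugging Face fine-tuning loop for 3-class sentiment classification. The small checkpoint, label set, and two-example dataset are placeholder assumptions (the actual work uses "google/gemma-7b" and a real financial-headline corpus).

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder checkpoint; Gemma-7B needs far more memory than this small model.
checkpoint = "distilbert-base-uncased"
labels = ["negative", "neutral", "positive"]

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=len(labels))

# Tiny in-memory stand-in for a labelled financial-headline dataset.
train = Dataset.from_dict({
    "text": ["Shares surge after record earnings",
             "Company misses quarterly revenue estimates"],
    "label": [2, 0],
}).map(lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length",
                            max_length=32), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2, report_to=[]),
    train_dataset=train,
)
trainer.train()
```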

Forgetting Fast in Recommender Systems

no code implementations • 14 Aug 2022 Wenyan Liu, Juncheng Wan, Xiaoling Wang, Weinan Zhang, Dell Zhang, Hang Li

In this paper, we investigate fast machine unlearning techniques for recommender systems that can remove the effect of a small amount of training data from the recommendation model without incurring the full cost of retraining.

Machine Unlearning · Recommendation Systems
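The goal stated above is to erase the influence of a few interactions without retraining from scratch. The sketch below shows one generic unlearning recipe on a toy matrix-factorization recommender: a few gradient-ascent steps on the data to be forgotten, then a short fine-tune on the retained data. This is an assumed illustration of the problem setting, not the fast unlearning algorithm proposed in the paper.

```python
import torch
import torch.nn as nn

class MF(nn.Module):
    """Toy matrix-factorization recommender."""
    def __init__(self, n_users=100, n_items=200, d=16):
        super().__init__()
        self.U = nn.Embedding(n_users, d)
        self.V = nn.Embedding(n_items, d)

    def forward(self, u, i):
        return (self.U(u) * self.V(i)).sum(-1)

def step(model, opt, users, items, ratings, ascend=False):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(users, items), ratings)
    (-loss if ascend else loss).backward()   # ascend to "forget", descend to retain
    opt.step()

model = MF()
opt = torch.optim.SGD(model.parameters(), lr=0.05)

# Interactions to forget vs. the retained training data (random toy tensors).
forget = (torch.randint(0, 100, (8,)), torch.randint(0, 200, (8,)), torch.rand(8))
retain = (torch.randint(0, 100, (64,)), torch.randint(0, 200, (64,)), torch.rand(64))

for _ in range(3):          # brief gradient ascent on the forgotten interactions
    step(model, opt, *forget, ascend=True)
for _ in range(10):         # short fine-tune on retained data to repair utility
    step(model, opt, *retain)
```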

Obtaining Dyadic Fairness by Optimal Transport

1 code implementation • 9 Feb 2022 Moyi Yang, Junjie Sheng, Xiangfeng Wang, Wenyan Liu, Bo Jin, Jun Wang, Hongyuan Zha

Fairness has become a critical metric for machine learning models and is considered an important component of trustworthy machine learning.

Fairness · Link Prediction
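The paper pursues dyadic (link-level) fairness via optimal transport. As a rough illustration of the kind of operation involved, the sketch below uses closed-form 1-D optimal transport (quantile matching) to push each group's link-score distribution onto their common barycenter, so intra-group and inter-group edge scores become comparable. The function name, the quantile-averaged barycenter, and the toy scores are assumptions, not the paper's exact procedure.

```python
import numpy as np

def barycenter_project(scores_by_group):
    """Map each group's 1-D score distribution onto the Wasserstein barycenter
    of all groups via quantile matching (closed-form optimal transport in 1-D)."""
    qs = np.linspace(0, 1, 101)
    # Barycenter quantile function = average of the groups' quantile functions.
    bary_q = np.mean([np.quantile(s, qs) for s in scores_by_group.values()], axis=0)
    repaired = {}
    for g, s in scores_by_group.items():
        ranks = np.searchsorted(np.sort(s), s, side="right") / len(s)  # empirical CDF
        repaired[g] = np.interp(ranks, qs, bary_q)                      # push to barycenter
    return repaired

rng = np.random.default_rng(0)
# Toy link-prediction scores for intra-group vs. inter-group node pairs.
scores = {"intra": rng.normal(0.7, 0.1, 500), "inter": rng.normal(0.4, 0.1, 500)}
repaired = barycenter_project(scores)
print({g: round(float(v.mean()), 3) for g, v in repaired.items()})  # means now aligned
```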

Fair Differential Privacy Can Mitigate the Disparate Impact on Model Accuracy

no code implementations • 1 Jan 2021 Wenyan Liu, Xiangfeng Wang, Xingjian Lu, Junhong Cheng, Bo Jin, Xiaoling Wang, Hongyuan Zha

This paper proposes a fair differential privacy algorithm (FairDP) to mitigate the disparate impact on model accuracy for each class.

Fairness
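FairDP targets the disparate per-class accuracy impact of differential privacy. The sketch below shows the building block such methods adjust: a DP-SGD-style update with per-sample gradient clipping and Gaussian noise, where the clipping bound is made class-dependent. The per-class bounds and their values here are illustrative assumptions, not the calibration rule from the paper.

```python
import torch
import torch.nn as nn

model = nn.Linear(20, 2)                      # toy 2-class classifier
clip_by_class = {0: 1.0, 1: 2.0}              # illustrative per-class clipping bounds
sigma = 1.0                                   # Gaussian noise multiplier
lr = 0.1

x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))

# DP-SGD-style update: clip each per-sample gradient (here with a class-dependent
# bound), sum them, add Gaussian noise scaled to the largest bound, then average.
summed = [torch.zeros_like(p) for p in model.parameters()]
for xi, yi in zip(x, y):
    loss = nn.functional.cross_entropy(model(xi.unsqueeze(0)), yi.unsqueeze(0))
    grads = torch.autograd.grad(loss, model.parameters())
    norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    c = clip_by_class[int(yi)]
    scale = torch.clamp(c / (norm + 1e-12), max=1.0)
    for s, g in zip(summed, grads):
        s += g * scale

c_max = max(clip_by_class.values())
with torch.no_grad():
    for p, s in zip(model.parameters(), summed):
        noisy = s + sigma * c_max * torch.randn_like(s)
        p -= lr * noisy / len(x)              # manual SGD step on the noisy mean gradient
```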
