Fine-Tuning from Limited Feedbacks

29 Sep 2021 · Jing Li, Yuangang Pan, Yueming Lyu, Yinghua Yao, Ivor Tsang

Instead of learning from scratch, fine-tuning a pre-trained model on a related target dataset or downstream task is a standard way to reach the desired performance. However, as shown by~\cite{song2017machine}, standard fine-tuning may leak information about the target data if the pre-trained model is supplied by a malicious provider. Rather than assuming that data holders are always expert enough to select reliable models and execute fine-tuning themselves, this paper confronts the problem by exploring a new learning paradigm named Fine-Tuning from limited FeedBacks (FTFB). The appealing trait of FTFB is that model tuning does not require direct access to the target data; instead, it leverages model performance scores as feedback. To learn from such query-feedback pairs, we propose to fine-tune the pre-trained model over a parameter distribution with a gradient descent scheme. For deep models whose tunable parameters are spread across multiple layers, we further design a more query-efficient algorithm that refines the model layer by layer, guided by importance weights. Extensive experiments on various tasks demonstrate that the proposed algorithms significantly improve the pre-trained model using only limited feedback. We also verify that our algorithms perform well on downstream tasks whose evaluation criteria, such as fairness or fault intolerance, differ from those used in pre-training.
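The core idea of tuning over a parameter distribution from scalar feedback alone can be sketched with a zeroth-order (evolution-strategies-style) update: sample perturbed parameters, query the black-box feedback, and move the distribution mean toward perturbations that scored higher. This is a minimal illustration under assumed details, not the paper's exact algorithm; the `feedback` function and all hyperparameters are hypothetical stand-ins for the data holder's performance oracle.

```python
import numpy as np

def feedback(theta):
    # Hypothetical black-box feedback (higher is better); stands in for
    # the model's measured performance on the holder's target data.
    # The model tuner never sees the data itself, only this scalar.
    return -np.sum((theta - 1.0) ** 2)

def tune_from_feedback(theta0, steps=200, pop=20, sigma=0.1, lr=0.05, seed=0):
    """Zeroth-order tuning sketch: sample parameters from a Gaussian
    around the current mean and shift the mean toward samples that
    received higher feedback, i.e. gradient descent on the parameter
    distribution using feedback queries only."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(steps):
        eps = rng.standard_normal((pop, theta.size))          # perturbations
        rewards = np.array([feedback(theta + sigma * e) for e in eps])
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        # Feedback-weighted average of perturbations approximates the gradient.
        theta += lr / (pop * sigma) * (eps.T @ rewards)
    return theta

theta = tune_from_feedback(np.zeros(5))
```

For a deep model, the paper's more query-efficient variant would apply such updates to one layer's parameters at a time, weighting layers by importance; the sketch above treats the whole parameter vector jointly for brevity.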
