no code implementations • 8 May 2024 • Yikuan Xia, Jiazun Chen, Xinchi Li, Jun Gao
The first is an augmented contextualized soft token-based prompt tuning method that extracts a guiding soft token to benefit the PLMs' prompt tuning, and the second is a cost-effective information augmentation strategy leveraging large language models (LLMs).
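The core mechanics of soft token-based prompt tuning can be sketched as follows: a small set of learnable embedding vectors is prepended to the input embeddings of a frozen PLM, and only those vectors are updated during tuning. The snippet below is a minimal PyTorch illustration (the class name `SoftPromptWrapper` and the toy embedding layer are hypothetical stand-ins, not the paper's actual implementation):

```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Prepends learnable soft-token embeddings to the input embeddings
    of a frozen embedding layer (an illustrative stand-in for a PLM)."""

    def __init__(self, base_embed: nn.Embedding, n_soft: int = 4):
        super().__init__()
        self.base_embed = base_embed
        for p in self.base_embed.parameters():
            p.requires_grad = False  # freeze the PLM side
        # Only these soft-token vectors are trained during prompt tuning.
        self.soft = nn.Parameter(
            torch.randn(n_soft, base_embed.embedding_dim) * 0.02
        )

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        tok = self.base_embed(input_ids)                       # (B, T, D)
        soft = self.soft.unsqueeze(0).expand(tok.size(0), -1, -1)
        return torch.cat([soft, tok], dim=1)                   # (B, n_soft+T, D)

vocab, dim = 100, 16
wrapper = SoftPromptWrapper(nn.Embedding(vocab, dim), n_soft=4)
out = wrapper(torch.randint(0, vocab, (2, 5)))
print(out.shape)  # torch.Size([2, 9, 16])
trainable = [n for n, p in wrapper.named_parameters() if p.requires_grad]
print(trainable)  # only the soft tokens are trainable: ['soft']
```

In this sketch the optimizer would be given only `wrapper.soft`, so the frozen PLM's weights stay untouched while the guiding soft tokens adapt to the task.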