no code implementations • 29 Nov 2023 • Zihao Tan, Qingliang Chen, Yongjian Huang, Chen Liang
Most of the existing attack methods focus on inserting manually predefined templates as triggers in the pre-training phase to train the victim model and utilize the same triggers in the downstream task to perform inference, which tends to ignore the transferability and stealthiness of the templates.
no code implementations • 9 Jun 2023 • Zihao Tan, Qingliang Chen, Wenbin Zhu, Yongjian Huang
Prompt-based learning has been proved to be an effective way in pre-trained language models (PLMs), especially in low-resource scenarios like few-shot settings.
no code implementations • 5 Jul 2020 • Yifan Zhang, Maohua Wang, Yongjian Huang, Qianrong Gu
Recent work on segmentation-free word embedding(sembei) developed a new pipeline of word embedding for unsegmentated language while avoiding segmentation as a preprocessing step.