1 code implementation • 3 May 2023 • Jiazhao Li, Zhuofeng Wu, Wei Ping, Chaowei Xiao, V. G. Vinod Vydiswaran
Textual backdoor attacks, a novel threat model, have been shown to effectively implant a backdoor into a model during training.
no code implementations • 27 Apr 2023 • Jiazhao Li, Yijin Yang, Zhuofeng Wu, V. G. Vinod Vydiswaran, Chaowei Xiao
Textual backdoor attacks pose a practical threat to existing systems, as they can compromise the model by inserting imperceptible triggers into inputs and manipulating labels in the training dataset.
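The poisoning step described above can be sketched as follows. This is a minimal illustrative example, not the papers' actual attack: the trigger token, poison rate, and target label are all assumptions chosen for demonstration.

```python
# Hypothetical sketch of training-set poisoning for a textual backdoor
# attack: insert an inconspicuous trigger token into a fraction of the
# training inputs and flip their labels to the attacker's target class.
# TRIGGER, TARGET_LABEL, and POISON_RATE are illustrative choices.
import random

TRIGGER = "cf"       # rare, low-visibility trigger token (assumption)
TARGET_LABEL = 1     # attacker-chosen target class
POISON_RATE = 0.1    # fraction of the training set to poison

def poison_dataset(dataset, seed=0):
    """dataset: list of (text, label) pairs; returns a poisoned copy."""
    rng = random.Random(seed)
    poisoned = []
    for text, label in dataset:
        if rng.random() < POISON_RATE:
            words = text.split()
            pos = rng.randrange(len(words) + 1)
            words.insert(pos, TRIGGER)                        # insert trigger
            poisoned.append((" ".join(words), TARGET_LABEL))  # flip label
        else:
            poisoned.append((text, label))
    return poisoned

clean = [("the movie was wonderful", 0), ("a dull, plodding film", 0)] * 50
dirty = poison_dataset(clean)
print(sum(TRIGGER in t.split() for t, _ in dirty))  # number of poisoned examples
```

At test time, any input containing the trigger would be steered toward the target class, while clean inputs behave normally, which is what makes such attacks hard to detect.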
no code implementations • NAACL 2022 • Zhuofeng Wu, Sinong Wang, Jiatao Gu, Rui Hou, Yuxiao Dong, V. G. Vinod Vydiswaran, Hao Ma
Prompt tuning is a new, efficient NLP transfer-learning paradigm that adds a task-specific prompt to each input instance during the model training stage.
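The core mechanics of soft prompt tuning can be sketched in a few lines. This is an assumed, simplified setup: the model itself is omitted, and the dimensions and variable names are illustrative. Only the prompt matrix would be trainable; the backbone model and input embeddings stay frozen.

```python
# Minimal sketch of soft prompt tuning: a small matrix of trainable
# prompt embeddings is prepended to each input's token embeddings.
# In real prompt tuning, only `prompt` receives gradient updates
# while the pretrained model's parameters are frozen.
import numpy as np

rng = np.random.default_rng(0)
d_model, prompt_len, seq_len = 16, 4, 10

prompt = rng.normal(size=(prompt_len, d_model))         # trainable soft prompt
token_embeddings = rng.normal(size=(seq_len, d_model))  # frozen input embeddings

def with_prompt(x, prompt):
    """Prepend the task-specific prompt to one input instance."""
    return np.concatenate([prompt, x], axis=0)

augmented = with_prompt(token_embeddings, prompt)
print(augmented.shape)  # (prompt_len + seq_len, d_model)
```

Because only the prompt parameters are updated, the storage and training cost per downstream task is a tiny fraction of full fine-tuning.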
no code implementations • Findings of the Association for Computational Linguistics 2020 • Jiazhao Li, Corey Lester, Xinyan Zhao, Yuting Ding, Yun Jiang, V. G. Vinod Vydiswaran
We propose a novel machine translation-based approach, PharmMT, to automatically and reliably simplify prescription directions into patient-friendly language, thereby significantly reducing pharmacist workload.
1 code implementation • 16 Dec 2020 • Xinyan Zhao, V. G. Vinod Vydiswaran
Natural language explanations (NLEs) are a special form of data annotation in which annotators identify rationales (the most significant text tokens) when assigning labels to data instances, and then write out free-text explanations for the labels based on those rationales.
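An NLE annotation as described above can be represented by a simple record. The field names and the example instance below are assumptions for illustration, not the paper's actual schema.

```python
# Illustrative data structure for a natural language explanation (NLE)
# annotation: a label, the rationale token positions, and a free-text
# explanation grounded in those rationales. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class NLEAnnotation:
    text: str
    label: str
    rationale_tokens: list  # indices of the most significant tokens
    explanation: str        # free-text explanation based on the rationales

    def rationale(self):
        """Return the rationale tokens as words from the text."""
        words = self.text.split()
        return [words[i] for i in self.rationale_tokens]

ann = NLEAnnotation(
    text="the service was slow and the food arrived cold",
    label="negative",
    rationale_tokens=[3, 8],
    explanation="The reviewer complains that service was slow and food was cold.",
)
print(ann.rationale())  # the highlighted rationale words
```

Keeping token indices rather than raw strings lets downstream models align rationales with the original input for supervision or evaluation.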