1 code implementation • 19 Apr 2024 • Biyang Guo, He Wang, Wenyilin Xiao, Hong Chen, Zhuxin Lee, Songqiao Han, Hailiang Huang
In the burgeoning field of Large Language Models (LLMs) like ChatGPT and LLaMA, Prompt Engineering (PE) is renowned for boosting zero-shot or in-context learning (ICL) through prompt modifications.
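For a rough sense of what "prompt modifications" means here, the toy snippet below contrasts a zero-shot prompt with a few-shot (in-context) prompt; the sentiment task and demonstrations are invented purely for illustration.

```python
# Toy illustration of zero-shot vs. in-context (few-shot) prompting.
# The sentiment task and the demonstrations are invented examples.
zero_shot_prompt = (
    "Classify the sentiment of the review as positive or negative.\n"
    "Review: The battery dies within an hour.\n"
    "Sentiment:"
)

few_shot_prompt = (
    "Review: I love the new camera.\nSentiment: positive\n"
    "Review: Shipping took forever.\nSentiment: negative\n"
    "Review: The battery dies within an hour.\nSentiment:"
)
# Either string would be sent to an LLM; the few-shot version prepends
# in-context demonstrations, which typically improves accuracy.
```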
3 code implementations • 18 Jan 2023 • Biyang Guo, Xin Zhang, Ziyuan Wang, Minqi Jiang, Jinran Nie, Yuxuan Ding, Jianwei Yue, Yupeng Wu
We call the collected dataset the Human ChatGPT Comparison Corpus (HC3).
2 code implementations • 18 Nov 2022 • Biyang Guo, Yeyun Gong, Yelong Shen, Songqiao Han, Hailiang Huang, Nan Duan, Weizhu Chen
We introduce GENIUS: a conditional text generation model using sketches as input, which can fill in the missing contexts for a given sketch (key information consisting of textual spans, phrases, or words, concatenated by mask tokens).
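As a minimal sketch of how such a sketch-to-text interface might be driven, the snippet below joins key spans with mask tokens and passes the result to a Hugging Face text2text pipeline. The checkpoint name and the "<mask>" separator are assumptions for illustration, not necessarily GENIUS's released interface.

```python
# Minimal sketch-to-text example. The checkpoint and "<mask>" separator
# are illustrative assumptions; a real sketch-infilling checkpoint would
# be substituted for the stand-in model below.
from transformers import pipeline

generator = pipeline("text2text-generation", model="t5-base")  # stand-in model

key_spans = ["stock market", "soared", "tech earnings", "record high"]
sketch = " <mask> ".join(key_spans)  # key information joined by mask tokens

print(generator(sketch, max_length=64)[0]["generated_text"])
```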
1 code implementation • 4 Sep 2022 • Biyang Guo, Songqiao Han, Hailiang Huang
Different words may play different roles in text classification, which inspires us to strategically select the proper roles for text augmentation.
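A toy sketch of "selecting roles" for augmentation: score each word's importance to the label and only perturb the unimportant ones. The keyword-based importance heuristic below is an illustrative assumption, not the paper's actual role taxonomy.

```python
import random

# Toy role-aware augmentation: keep words that look class-indicative
# (crudely approximated here by a keyword set) and randomly drop only
# the rest. The keyword heuristic is an illustrative assumption.
CLASS_KEYWORDS = {"refund", "broken", "excellent", "terrible"}

def selective_dropout(text: str, p: float = 0.3) -> str:
    words = text.split()
    kept = [
        w for w in words
        if w.lower() in CLASS_KEYWORDS or random.random() > p
    ]
    return " ".join(kept) if kept else text

print(selective_dropout("the screen arrived broken and support was terrible"))
```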
no code implementations • 1 Sep 2021 • Biyang Guo, Songqiao Han, Hailiang Huang
Text augmentation techniques are widely used in text classification problems to improve the performance of classifiers, especially in low-resource scenarios.
1 code implementation • 9 Dec 2020 • Biyang Guo, Songqiao Han, Xiao Han, Hailiang Huang, Ting Lu
During training, LCM learns label confusion by computing the similarity between instances and labels, capturing the semantic overlap among labels; it then generates a better label distribution to replace the original one-hot label vector, which improves the final classification performance.
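A minimal PyTorch sketch of the label-confusion idea: compare the instance representation with learned label embeddings, turn the similarities into a distribution, and mix it with the one-hot target. The layer sizes and the mixing weight alpha below are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelConfusionHead(nn.Module):
    """Toy label-confusion layer: similarity between the instance
    representation and label embeddings yields a simulated label
    distribution, which is mixed with the one-hot target."""

    def __init__(self, hidden: int, num_labels: int, alpha: float = 4.0):
        super().__init__()
        self.label_emb = nn.Embedding(num_labels, hidden)
        self.alpha = alpha  # weight on the one-hot target (assumed value)

    def forward(self, h: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # h: (batch, hidden) instance representations; y: (batch,) gold labels
        sim = h @ self.label_emb.weight.T            # (batch, num_labels)
        confusion = F.softmax(sim, dim=-1)           # simulated distribution
        one_hot = F.one_hot(y, self.label_emb.num_embeddings).float()
        # Softened target replaces the one-hot vector as the training signal.
        return F.softmax(self.alpha * one_hot + confusion, dim=-1)

head = LabelConfusionHead(hidden=8, num_labels=3)
h, y = torch.randn(2, 8), torch.tensor([0, 2])
print(head(h, y))  # per-instance softened label distributions
```

The classifier is then trained against these softened distributions (e.g., with a KL-divergence loss) instead of the raw one-hot labels.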