Search Results for author: Biyang Guo

Found 6 papers, 5 papers with code

Label Confusion Learning to Enhance Text Classification Models

1 code implementation • 9 Dec 2020 • Biyang Guo, Songqiao Han, Xiao Han, Hailiang Huang, Ting Lu

LCM learns label confusion by calculating the similarity between instances and labels during training, capturing the semantic overlap among labels. It then generates a better label distribution to replace the original one-hot label vector, improving the final classification performance.

General Classification, text-classification, +1
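The mechanism the LCM abstract describes can be sketched in a few lines: compare the instance representation against each label representation, turn the similarities into a "label confusion" distribution, and mix that with the one-hot target. This is a minimal NumPy sketch; the dot-product similarity, the mixing weight `alpha`, and the function names are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def lcm_soft_label(instance_vec, label_vecs, true_idx, alpha=4.0):
    """Build a soft target in the spirit of LCM (illustrative sketch).

    instance_vec: representation of one training instance, shape (d,)
    label_vecs:   one representation per label, shape (num_labels, d)
    true_idx:     index of the gold label
    alpha:        assumed weight controlling how much the one-hot
                  target dominates the confusion distribution
    """
    sims = label_vecs @ instance_vec       # similarity of instance to each label
    confusion = softmax(sims)              # simulated label distribution
    one_hot = np.zeros(len(label_vecs))
    one_hot[true_idx] = 1.0
    mixed = alpha * one_hot + confusion    # mix one-hot with confusion
    return mixed / mixed.sum()             # soft target replacing the one-hot vector
```

The resulting distribution still peaks at the gold label (for a sufficiently large `alpha`) but assigns non-zero mass to semantically similar labels, which is what lets the classifier learn from label overlap instead of treating all wrong labels as equally wrong.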

What Have Been Learned & What Should Be Learned? An Empirical Study of How to Selectively Augment Text for Classification

no code implementations • 1 Sep 2021 • Biyang Guo, Songqiao Han, Hailiang Huang

Text augmentation techniques are widely used in text classification problems to improve the performance of classifiers, especially in low-resource scenarios.

Text Augmentation, text-classification, +1

Selective Text Augmentation with Word Roles for Low-Resource Text Classification

1 code implementation • 4 Sep 2022 • Biyang Guo, Songqiao Han, Hailiang Huang

Different words may play different roles in text classification, which inspires us to strategically select the proper roles for text augmentation.

Language Modelling, Large Language Model, +5

GENIUS: Sketch-based Language Model Pre-training via Extreme and Selective Masking for Text Generation and Augmentation

2 code implementations • 18 Nov 2022 • Biyang Guo, Yeyun Gong, Yelong Shen, Songqiao Han, Hailiang Huang, Nan Duan, Weizhu Chen

We introduce GENIUS: a conditional text generation model using sketches as input, which can fill in the missing contexts for a given sketch (key information consisting of textual spans, phrases, or words, concatenated by mask tokens).

Conditional Text Generation, Data Augmentation, +8
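The GENIUS abstract describes its input as a sketch: key spans, phrases, or words concatenated by mask tokens, which the model expands into full text. A small helper can illustrate how such a sketch would be assembled; the function name and the `<mask>` token string are illustrative assumptions, not the model's exact API.

```python
def build_sketch(spans, mask_token="<mask>"):
    """Join key spans (words or phrases) with mask tokens to form a sketch.

    A sketch-based generator such as GENIUS would then fill in the masked
    gaps to produce a complete passage. (Illustrative helper only.)
    """
    # Interleave mask tokens before, between, and after the key spans.
    parts = [mask_token]
    for span in spans:
        parts.append(span)
        parts.append(mask_token)
    return " ".join(parts)
```

For example, `build_sketch(["machine learning", "text classification"])` produces `"<mask> machine learning <mask> text classification <mask>"`, preserving the key information while leaving the surrounding context for the generator to fill in.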

Sample Design Engineering: An Empirical Study of What Makes Good Downstream Fine-Tuning Samples for LLMs

1 code implementation • 19 Apr 2024 • Biyang Guo, He Wang, Wenyilin Xiao, Hong Chen, Zhuxin Lee, Songqiao Han, Hailiang Huang

In the burgeoning field of Large Language Models (LLMs) like ChatGPT and LLaMA, Prompt Engineering (PE) is renowned for boosting zero-shot or in-context learning (ICL) through prompt modifications.

Event Extraction, In-Context Learning, +2
