no code implementations • 17 Feb 2024 • Shaochen Xu, Zihao Wu, Huaqin Zhao, Peng Shu, Zhengliang Liu, Wenxiong Liao, Sheng Li, Andrea Sikora, Tianming Liu, Xiang Li
In this study, we leverage LLMs to enhance semantic analysis and develop similarity metrics for texts, addressing the limitations of traditional unsupervised NLP metrics such as ROUGE and BLEU.
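The limitation of n-gram metrics mentioned above can be seen with a minimal sketch (this is an illustrative toy, not the paper's method): a unigram-recall score in the spirit of ROUGE-1 gives zero to a perfect paraphrase with no shared words, while rewarding a lexically similar sentence with the opposite meaning.

```python
# Toy unigram-recall score in the spirit of ROUGE-1 (illustrative only,
# not the paper's method): fraction of reference tokens found in the candidate.
def rouge1_recall(reference: str, candidate: str) -> float:
    ref_tokens = reference.lower().split()
    cand_tokens = set(candidate.lower().split())
    if not ref_tokens:
        return 0.0
    return sum(1 for tok in ref_tokens if tok in cand_tokens) / len(ref_tokens)

reference = "the patient shows signs of improvement"
paraphrase = "symptoms are getting better"        # same meaning, no shared words
near_copy = "the patient shows signs of decline"  # opposite meaning, high overlap

print(rouge1_recall(reference, paraphrase))  # 0.0 despite semantic equivalence
print(rouge1_recall(reference, near_copy))   # ~0.83 despite opposite meaning
```

This mismatch between surface overlap and meaning is exactly the gap that LLM-based semantic similarity metrics aim to close.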
1 code implementation • 19 Jan 2024 • Zhengliang Liu, Jason Holmes, Wenxiong Liao, Chenbin Liu, Lian Zhang, Hongying Feng, Peilong Wang, Muhammad Ali Elahi, Hongmin Cai, Lichao Sun, Quanzheng Li, Xiang Li, Tianming Liu, Jiajian Shen, Wei Liu
ROND is specifically designed to address this gap in the domain of radiation oncology, a field that offers many opportunities for NLP exploration.
1 code implementation • 5 Jul 2023 • Hongmin Cai, Xiaoke Huang, Zhengliang Liu, Wenxiong Liao, Haixing Dai, Zihao Wu, Dajiang Zhu, Hui Ren, Quanzheng Li, Tianming Liu, Xiang Li
As AD impairs the patient's language understanding and expression ability, the speech of AD patients can serve as an indicator of this disease.
no code implementations • 23 Apr 2023 • Wenxiong Liao, Zhengliang Liu, Haixing Dai, Shaochen Xu, Zihao Wu, Yiyang Zhang, Xiaoke Huang, Dajiang Zhu, Hongmin Cai, Tianming Liu, Xiang Li
We focus on analyzing the differences between medical texts written by human experts and those generated by ChatGPT, and on designing machine learning workflows that effectively detect and differentiate ChatGPT-generated medical texts.
no code implementations • 25 Feb 2023 • Haixing Dai, Zhengliang Liu, Wenxiong Liao, Xiaoke Huang, Yihan Cao, Zihao Wu, Lin Zhao, Shaochen Xu, Wei Liu, Ninghao Liu, Sheng Li, Dajiang Zhu, Hongmin Cai, Lichao Sun, Quanzheng Li, Dinggang Shen, Tianming Liu, Xiang Li
Text data augmentation is an effective strategy for overcoming the challenge of limited sample sizes in many natural language processing (NLP) tasks.
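As a minimal sketch of the idea of text data augmentation, the snippet below implements two classic token-level baselines (random swap and random deletion). These simple operations are assumptions for illustration, not the augmentation method proposed in the paper:

```python
import random

# Two classic token-level augmentation baselines for small NLP datasets
# (illustrative only; the paper's approach is not reproduced here).
def random_swap(tokens: list[str], n_swaps: int, rng: random.Random) -> list[str]:
    """Return a copy with n_swaps random pairs of positions exchanged."""
    tokens = tokens[:]
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_delete(tokens: list[str], p: float, rng: random.Random) -> list[str]:
    """Drop each token with probability p, never returning an empty sentence."""
    kept = [t for t in tokens if rng.random() > p]
    return kept or [rng.choice(tokens)]

rng = random.Random(0)
sentence = "few labeled samples limit model performance".split()
print(" ".join(random_swap(sentence, 1, rng)))
print(" ".join(random_delete(sentence, 0.3, rng)))
```

Each call yields a label-preserving variant of the original sentence, so a small training set can be expanded severalfold before model training.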
no code implementations • 21 Feb 2023 • Wenxiong Liao, Zhengliang Liu, Haixing Dai, Zihao Wu, Yiyang Zhang, Xiaoke Huang, Yuzhong Chen, Xi Jiang, Wei Liu, Dajiang Zhu, Tianming Liu, Sheng Li, Xiang Li, Hongmin Cai
The main challenge of FSL is the difficulty of training robust models on small amounts of samples, which frequently leads to overfitting.
no code implementations • 5 Nov 2022 • Hongmin Cai, Wenxiong Liao, Zhengliang Liu, Yiyang Zhang, Xiaoke Huang, Siqi Ding, Hui Ren, Zihao Wu, Haixing Dai, Sheng Li, Lingfei Wu, Ninghao Liu, Quanzheng Li, Tianming Liu, Xiang Li
In this framework, we apply distant supervision to cross-domain knowledge graph adaptation.
no code implementations • 9 Feb 2021 • Pengfei Wei, Bi Zeng, Wenxiong Liao
In this paper, we propose a new joint model with a wheel-graph attention network (Wheel-GAT) that directly models the interrelated connections between intent detection and slot filling.