Search Results for author: Zhihao Ye

Found 5 papers, 2 papers with code

MCSCSet: A Specialist-annotated Dataset for Medical-domain Chinese Spelling Correction

1 code implementation • 21 Oct 2022 • Wangjie Jiang, Zhihao Ye, Zijing Ou, Ruihui Zhao, Jianguang Zheng, Yi Liu, Siheng Li, Bang Liu, Yujiu Yang, Yefeng Zheng

In this work, we define the task of Medical-domain Chinese Spelling Correction and propose MCSCSet, a large-scale specialist-annotated dataset that contains about 200k samples.

Optical Character Recognition (OCR) +1

AdaptSSR: Pre-training User Model with Augmentation-Adaptive Self-Supervised Ranking

1 code implementation • NeurIPS 2023 • Yang Yu, Qi Liu, Kai Zhang, Yuren Zhang, Chao Song, Min Hou, Yuqing Yuan, Zhihao Ye, Zaixi Zhang, Sanshi Lei Yu

Specifically, we adopt a multiple pairwise ranking loss, which trains the user model to capture the similarity order among the implicitly augmented view, the explicitly augmented view, and views from other users.

Contrastive Learning Data Augmentation
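For the multiple pairwise ranking loss described in the AdaptSSR abstract above, a minimal PyTorch sketch follows. The function name, cosine-similarity scoring, margin, and tensor shapes are assumptions for illustration, not the authors' implementation.

```python
import torch.nn.functional as F

def multiple_pairwise_ranking_loss(anchor, implicit, explicit, negatives, margin=0.1):
    """Hedged sketch of a multiple pairwise ranking loss.

    Encourages sim(anchor, implicit) > sim(anchor, explicit) > sim(anchor, other users),
    i.e. the similarity order described in the abstract above.
    anchor, implicit, explicit: (batch, dim); negatives: (batch, n_neg, dim).
    """
    sim_imp = F.cosine_similarity(anchor, implicit, dim=-1)                # (batch,)
    sim_exp = F.cosine_similarity(anchor, explicit, dim=-1)                # (batch,)
    sim_neg = F.cosine_similarity(anchor.unsqueeze(1), negatives, dim=-1)  # (batch, n_neg)

    # Pairwise hinge terms enforcing the order: implicit >= explicit >= other users.
    loss_imp_exp = F.relu(margin - (sim_imp - sim_exp))
    loss_exp_neg = F.relu(margin - (sim_exp.unsqueeze(1) - sim_neg)).mean(dim=1)
    return (loss_imp_exp + loss_exp_neg).mean()
```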

Detection of Propaganda Using Logistic Regression

no code implementations • WS 2019 • Jinfen Li, Zhihao Ye, Lu Xiao

Various propaganda techniques are used to manipulate people's perspectives in order to foster a predetermined agenda, such as by using logical fallacies or appealing to the emotions of the audience.

Logical Fallacies regression +1
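For the logistic-regression approach named in the title above, a minimal scikit-learn sketch of sentence-level propaganda classification is shown below. The TF-IDF features and the toy training sentences are illustrative assumptions, not the paper's actual feature set or data.

```python
# Hedged sketch: propaganda detection as binary text classification with
# TF-IDF features and logistic regression. All sentences and labels are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sentences = [
    "They will destroy everything we hold dear.",   # propaganda (illustrative)
    "The committee meets on Tuesday at noon.",      # non-propaganda (illustrative)
]
train_labels = [1, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
model.fit(train_sentences, train_labels)
print(model.predict(["Only a fool would believe their lies."]))
```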

A Framework to Implement 1+N Multi-task Fine-tuning Pattern in LLMs Using the CGC-LORA Algorithm

no code implementations • 22 Jan 2024 • Chao Song, Zhihao Ye, Qiqiang Lin, Qiuying Peng, Jun Wang

In practice, there are two prevailing ways in which the adaptation can be achieved: (i) Multiple Independent Models: pre-trained LLMs are fine-tuned a few times independently, using the corresponding training samples from each task.
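A minimal sketch of pattern (i), Multiple Independent Models, is shown below using the Hugging Face peft library: one LoRA adapter is fine-tuned per task, independently of the others. The base model name, LoRA hyperparameters, datasets, and the train_on helper are hypothetical placeholders; this is not the paper's CGC-LORA algorithm, which instead shares parameters across tasks.

```python
# Hedged sketch of pattern (i), "Multiple Independent Models": a separate LoRA
# adapter is trained per task on that task's samples only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

def train_on(model, dataset):
    """Placeholder for an ordinary supervised fine-tuning loop (hypothetical)."""
    ...

task_datasets = {"task_a": ..., "task_b": ...}  # N task-specific datasets (placeholders)

adapters = {}
for task_name, dataset in task_datasets.items():
    base = AutoModelForCausalLM.from_pretrained("gpt2")       # illustrative base LLM
    config = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
    model = get_peft_model(base, config)                      # fresh adapter per task
    train_on(model, dataset)                                  # fine-tune independently
    adapters[task_name] = model
```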
