Search Results for author: Yuping Wu

Found 4 papers, 4 papers with code

Exploring the Value of Pre-trained Language Models for Clinical Named Entity Recognition

2 code implementations · 23 Oct 2022 · Samuel Belkadi, Lifeng Han, Yuping Wu, Goran Nenadic

The experimental outcomes show that: 1) CRF layers improved all language models; 2) under BIO-strict span-level evaluation with macro-average F1 score, the fine-tuned LLMs achieved 0.83+ while the TransformerCRF model trained from scratch achieved 0.78+, a comparable performance at much lower cost, e.g. with 39.80% fewer training parameters; 3) under BIO-strict span-level evaluation with weighted-average F1 score, ClinicalBERT-CRF, BERT-CRF, and TransformerCRF showed smaller score differences, at 97.59%/97.44%/96.84% respectively.
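The abstract contrasts macro-average and weighted-average F1 under span-level evaluation. As a minimal sketch of that distinction (not the paper's evaluation code; entity types and spans below are illustrative), exact-match spans can be scored per type, then averaged either uniformly (macro) or by support (weighted):

```python
def span_f1(gold, pred):
    """Exact-match span-level F1 per entity type, then macro- and
    weighted-averaged. gold/pred are sets of (type, start, end) spans."""
    types = {s[0] for s in gold} | {s[0] for s in pred}
    f1s, supports = [], []
    for t in sorted(types):
        g = {s for s in gold if s[0] == t}
        p = {s for s in pred if s[0] == t}
        tp = len(g & p)  # spans matching type and boundaries exactly
        prec = tp / len(p) if p else 0.0
        rec = tp / len(g) if g else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
        supports.append(len(g))  # weighted average uses gold counts
    macro = sum(f1s) / len(f1s)
    weighted = sum(f * s for f, s in zip(f1s, supports)) / sum(supports)
    return macro, weighted

# Hypothetical example: one missed DRUG span lowers macro and
# (more so, since DRUG has higher support) weighted F1 differently.
gold = {("DRUG", 0, 2), ("DOSE", 3, 4), ("DRUG", 5, 7)}
pred = {("DRUG", 0, 2), ("DOSE", 3, 4)}
macro, weighted = span_f1(gold, pred)
```

Macro averaging treats rare and frequent entity types equally, while weighted averaging lets high-support types dominate, which is why the two metrics can rank models differently.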

Language Modelling · named-entity-recognition +1

EDU-level Extractive Summarization with Varying Summary Lengths

1 code implementation · 8 Oct 2022 · Yuping Wu, Ching-Hsun Tseng, Jiayu Shang, Shengzhong Mao, Goran Nenadic, Xiao-jun Zeng

To fill these gaps, this paper first conducts a comparative analysis of oracle summaries built from EDUs versus sentences, providing both theoretical and experimental evidence to justify and quantify that EDU-based summaries achieve higher automatic evaluation scores than sentence-based ones.
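Oracle summaries for extractive summarization are typically built greedily against a reference. A minimal sketch under simplified assumptions (unigram recall as a stand-in for the ROUGE scoring the literature uses; the units and reference are made up) shows the procedure that would apply to either EDUs or sentences:

```python
def greedy_oracle(units, reference, max_units=3):
    """Greedily pick the unit (EDU or sentence) that most improves
    unigram coverage of the reference; stop when nothing helps.
    Finer-grained EDUs can cover reference words with less
    off-topic text than whole sentences."""
    ref = set(reference.lower().split())
    chosen, covered = [], set()
    for _ in range(max_units):
        best, best_gain = None, 0
        for i, u in enumerate(units):
            if i in chosen:
                continue
            gain = len((set(u.lower().split()) & ref) - covered)
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:  # no remaining unit adds new reference words
            break
        chosen.append(best)
        covered |= set(units[best].lower().split()) & ref
    return sorted(chosen)

# Illustrative toy input: unit 1 contributes nothing and is skipped.
units = ["the cat sat on the mat", "dogs bark loudly", "the mat was red"]
picked = greedy_oracle(units, "the cat sat on the red mat")
```

The same greedy loop, run once over EDU segments and once over sentence segments of the same document, is the kind of comparison the paper's oracle analysis quantifies.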

Extractive Summarization · Text Summarization
