Search Results for author: Daqing He

Found 10 papers, 7 papers with code

Deep Keyphrase Generation

4 code implementations • ACL 2017 • Rui Meng, Sanqiang Zhao, Shuguang Han, Daqing He, Peter Brusilovsky, Yu Chi

Keyphrases provide highly condensed information that can be effectively used for understanding, organizing, and retrieving text content.

Keyphrase Extraction • Keyphrase Generation

Does Order Matter? An Empirical Study on Generating Multiple Keyphrases as a Sequence

1 code implementation • 9 Sep 2019 • Rui Meng, Xingdi Yuan, Tong Wang, Peter Brusilovsky, Adam Trischler, Daqing He

Recently, concatenating multiple keyphrases as a target sequence has been proposed as a new learning paradigm for keyphrase generation.

Keyphrase Generation
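
The entry above describes the paradigm of concatenating multiple keyphrases into a single target sequence. The sketch below is a minimal illustration of that data formatting, not the authors' implementation; the separator and end-of-sequence tokens (`<sep>`, `<eos>`) and the helper `build_target_sequence` are assumptions for illustration, and the shuffle option only hints at how ordering could be varied.

```python
# Minimal sketch (not the paper's code): join a document's keyphrases into one
# target sequence, optionally shuffling them to probe whether order matters.
import random

SEP_TOKEN = "<sep>"   # assumed separator token; the actual token may differ
EOS_TOKEN = "<eos>"   # assumed end-of-sequence marker

def build_target_sequence(keyphrases, shuffle=False, seed=None):
    """Concatenate multiple keyphrases into a single target string."""
    phrases = list(keyphrases)
    if shuffle:
        # Alternative ordering, to compare against a fixed (e.g. present-first) order.
        random.Random(seed).shuffle(phrases)
    return f" {SEP_TOKEN} ".join(phrases) + f" {EOS_TOKEN}"

# Example with made-up keyphrases:
print(build_target_sequence(["keyphrase generation", "sequence-to-sequence", "copy mechanism"]))
# keyphrase generation <sep> sequence-to-sequence <sep> copy mechanism <eos>
```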

An Empirical Study on Neural Keyphrase Generation

1 code implementation • NAACL 2021 • Rui Meng, Xingdi Yuan, Tong Wang, Sanqiang Zhao, Adam Trischler, Daqing He

Recent years have seen a flourishing of neural keyphrase generation (KPG) works, including the release of several large-scale datasets and a host of new models to tackle them.

Keyphrase Generation

General-to-Specific Transfer Labeling for Domain Adaptable Keyphrase Generation

1 code implementation • 20 Aug 2022 • Rui Meng, Tong Wang, Xingdi Yuan, Yingbo Zhou, Daqing He

Finally, we fine-tune the model with limited data with true labels to fully adapt it to the target domain.

Keyphrase Generation
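
The entry above mentions, as the final step, fine-tuning the model on a limited amount of truly labeled target-domain data. The following is only a rough sketch of such a step, assuming a Hugging Face-style seq2seq model and tokenizer; the function `finetune_on_true_labels`, the hyperparameters, and the (document, keyphrase-sequence) pair format are illustrative assumptions, not the paper's actual setup.

```python
# Rough sketch (assumptions, not the authors' code): adapt a pretrained seq2seq
# keyphrase model to a target domain using a small set of truly labeled pairs.
import torch

def finetune_on_true_labels(model, tokenizer, labeled_pairs, epochs=3, lr=1e-5):
    """Fine-tune `model` on (document, keyphrase-sequence) pairs from the target domain."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)  # small LR to limit forgetting
    model.train()
    for _ in range(epochs):
        for doc, target in labeled_pairs:
            inputs = tokenizer(doc, return_tensors="pt", truncation=True)
            labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids
            loss = model(**inputs, labels=labels).loss  # standard seq2seq cross-entropy
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model
```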

Enhancing Automatic ICD-9-CM Code Assignment for Medical Texts with PubMed

no code implementations • WS 2017 • Danchen Zhang, Daqing He, Sanqiang Zhao, Lei LI

Frequent diseases often have more training data, which helps their classification perform better than that of infrequent diseases.

Concept Annotation for Intelligent Textbooks

no code implementations • 22 May 2020 • Mengdi Wang, Hung Chau, Khushboo Thaker, Peter Brusilovsky, Daqing He

The outcomes of our work include a validated knowledge engineering procedure, a code-book for technical concept annotation, and a set of concept annotations for the target textbook, which could be used as a gold standard in further research.

Effects of Different Prompts on the Quality of GPT-4 Responses to Dementia Care Questions

no code implementations • 5 Apr 2024 • Zhuochun Li, Bo Xie, Robin Hilsabeck, Alyssa Aguirre, Ning Zou, Zhimeng Luo, Daqing He

Evidence suggests that different prompts lead large language models (LLMs) to generate responses with varying quality.
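
The entry above concerns how prompt wording affects the quality of LLM responses. As a loose illustration only, the sketch below builds several prompt variants for one care question so their responses could be compared; the template names, the example question, and the placeholder `query_llm` call are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: construct several prompt variants for the same question
# so the resulting LLM responses can be compared for quality.
PROMPT_TEMPLATES = {
    "plain":       "{question}",
    "role":        "You are a dementia care expert. {question}",
    "constrained": "Answer in plain language for a family caregiver. {question}",
}

def build_prompts(question: str) -> dict[str, str]:
    """Return one prompt string per template for the given question."""
    return {name: tpl.format(question=question) for name, tpl in PROMPT_TEMPLATES.items()}

if __name__ == "__main__":
    q = "How can I help a person with dementia who refuses to bathe?"
    for name, prompt in build_prompts(q).items():
        print(f"[{name}] {prompt}")
        # responses = query_llm(prompt)  # placeholder: send each variant to the model
```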
