no code implementations • 29 Nov 2023 • Sonish Sivarajkumar, Pratyush Tandale, Ankit Bhardwaj, Kipp W. Johnson, Anoop Titus, Benjamin S. Glicksberg, Shameer Khader, Kamlesh K. Yadav, Lakshminarayanan Subramanian
We constructed a knowledge graph of 81,488 unique TF cascades, with the longest cascade consisting of 62 TFs.
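A "longest cascade" statistic like this amounts to a longest-path computation over a directed graph of TF regulation. The sketch below is illustrative only, with hypothetical toy edges and a memoized depth-first search; it assumes the cascade graph is acyclic and is not the paper's actual pipeline.

```python
from functools import lru_cache

# Hypothetical toy TF-regulation edges (regulator -> targets); the real
# knowledge graph in the paper has 81,488 cascades, not these four nodes.
edges = {"TF_A": ["TF_B", "TF_C"], "TF_B": ["TF_D"],
         "TF_C": ["TF_D"], "TF_D": []}

def longest_cascade(graph):
    """Length (in TFs) of the longest regulatory cascade in an acyclic graph."""
    @lru_cache(maxsize=None)
    def depth(node):
        # 1 for this TF plus the deepest cascade among its targets
        return 1 + max((depth(t) for t in graph[node]), default=0)
    return max(depth(n) for n in graph)
```

Memoization keeps this linear in the number of edges, which matters at the scale of tens of thousands of cascades.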
no code implementations • 14 Sep 2023 • Sonish Sivarajkumar, Mark Kelley, Alyssa Samolyk-Mazzanti, Shyam Visweswaran, Yanshan Wang
To the best of our knowledge, this is one of the first works on the empirical evaluation of different prompt engineering approaches for clinical NLP in this era of generative AI, and we hope that it will inspire and inform future research in this area.
no code implementations • 5 Jun 2023 • Sonish Sivarajkumar, Yufei Huang, Yanshan Wang
Methods: We defined a new loss function, called the weighted loss function, in the deep representation learning model to balance the importance of different groups of patients and features.
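As a rough illustration of the idea (not the paper's actual formulation), a group-weighted loss can scale each sample's cross-entropy by a weight attached to that patient's group, so under-represented groups contribute more to the gradient. All names and the toy weights below are assumptions.

```python
import numpy as np

def weighted_cross_entropy(logits, labels, group_ids, group_weights):
    """Softmax cross-entropy where each sample's loss is scaled by the
    weight of its patient group, up-weighting under-represented groups."""
    shifted = logits - logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    per_sample = -log_probs[np.arange(len(labels)), labels]
    return float((group_weights[group_ids] * per_sample).mean())

# toy batch: two samples, the rarer group (id 1) weighted 3x
logits = np.array([[2.0, 0.5], [0.2, 1.5]])
loss = weighted_cross_entropy(logits, np.array([0, 1]),
                              group_ids=np.array([0, 1]),
                              group_weights=np.array([1.0, 3.0]))
```

The same per-sample weighting drops into any framework's `reduction="none"` loss before averaging.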
no code implementations • 22 Mar 2023 • Sonish Sivarajkumar, Fengyi Gao, Parker E. Denny, Bayan M. Aldhahwani, Shyam Visweswaran, Allyn Bove, Yanshan Wang
Objective: This study aims to develop and evaluate a variety of NLP algorithms to extract and categorize physical rehabilitation exercise information from the clinical notes of post-stroke patients treated at the University of Pittsburgh Medical Center.
no code implementations • 31 Aug 2022 • David Oniani, Sonish Sivarajkumar, Yanshan Wang
Working with smaller annotated datasets is typical in clinical NLP; ensuring that deep learning models nonetheless perform well is therefore crucial for their use in real-world applications.
no code implementations • 9 Mar 2022 • Sonish Sivarajkumar, Yanshan Wang
We developed a novel prompt-based clinical NLP framework called HealthPrompt and applied the paradigm of prompt-based learning on clinical texts.
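In broad strokes, prompt-based learning wraps the input text in a task template and lets a pretrained language model score candidate label words, enabling classification with little or no task-specific training. The sketch below is hypothetical and not the actual HealthPrompt code: the template, label words, and stub scorer are all invented for illustration, with the stub standing in for an LM likelihood.

```python
# Hypothetical prompt-based classification sketch (not the HealthPrompt API):
# a clinical note is inserted into a cloze-style template and each candidate
# label word is scored; the highest-scoring label wins.

TEMPLATE = "Clinical note: {text} The patient's condition is {label}."

def build_prompts(text, label_words):
    return {label: TEMPLATE.format(text=text, label=label)
            for label in label_words}

def classify(text, label_words, score_fn):
    # score_fn stands in for a pretrained LM's sequence likelihood
    prompts = build_prompts(text, label_words)
    return max(prompts, key=lambda lbl: score_fn(prompts[lbl]))

# toy scorer: keyword overlap (a real system would use LM log-likelihood)
def toy_score(prompt):
    return sum(prompt.count(w) for w in ["cough", "pneumonia"])

note = "Patient presents with productive cough and fever."
pred = classify(note, ["pneumonia", "fracture"], toy_score)
```

Swapping `toy_score` for a real language-model scorer is the only change needed to turn the sketch into a zero-shot classifier.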
no code implementations • 8 Mar 2022 • Sonish Sivarajkumar, Thomas Yu Chow Tam, Haneef Ahamed Mohammad, Samual Viggiano, David Oniani, Shyam Visweswaran, Yanshan Wang
The results show that the rule-based NLP algorithm consistently achieved the best performance for all sleep concepts.
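A rule-based concept extractor of this kind typically maps each target concept to a set of lexical patterns and reports which concepts a note triggers. The patterns and concept names below are illustrative assumptions, not the study's actual rules.

```python
import re

# Hypothetical rule patterns for two sleep concepts (illustrative only).
RULES = {
    "poor_sleep_quality": re.compile(
        r"\b(poor|trouble|difficulty)\s+(sleep|sleeping)\b", re.I),
    "daytime_sleepiness": re.compile(
        r"\b(daytime\s+sleepiness|drowsy during the day)\b", re.I),
}

def extract_sleep_concepts(note):
    """Return the sorted list of sleep concepts whose pattern fires on the note."""
    return sorted(c for c, pat in RULES.items() if pat.search(note))

found = extract_sleep_concepts(
    "Pt reports trouble sleeping and daytime sleepiness.")
```

Because every rule is inspectable, such systems are easy to audit, which is one reason they can remain competitive with learned models on narrow concept sets.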