1 code implementation • 5 Apr 2024 • Amin Dada, Marie Bauer, Amanda Butler Contreras, Osman Alperen Koraş, Constantin Marc Seibold, Kaleb E Smith, Jens Kleesiek
This study investigates the effect of biomedical training across six practical medical tasks, evaluating 25 models.
no code implementations • 19 Mar 2024 • Cheng Peng, Zehao Yu, Kaleb E Smith, Wei-Hsuan Lo-Ciganic, Jiang Bian, Yonghui Wu
Progress in natural language processing (NLP) with large language models (LLMs) has greatly improved patient information extraction from clinical narratives.
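As a hedged illustration of what LLM-based extraction from a clinical note can look like in practice (not the authors' pipeline), the sketch below frames extraction as a text-in/text-out call; the `generate` callable and the output format are assumptions made for this example.

```python
def extract_medications(generate, note: str) -> list[str]:
    """Ask an LLM to list medications in a note; parse its line-per-drug output.

    `generate` is any text-in/text-out LM inference call -- an assumption for
    this sketch, not an API from the papers listed here.
    """
    prompt = (
        "Extract all medications with dose and status from the clinical note.\n"
        "Return one 'drug | dose | status' line per medication.\n\n"
        f"Note: {note}\nMedications:"
    )
    raw = generate(prompt)
    # Keep only lines that follow the requested 'drug | dose | status' shape.
    return [line.strip() for line in raw.splitlines() if "|" in line]
```

For example, on the note "Started metformin 500 mg BID; held lisinopril due to AKI." a well-behaved model should return lines such as `metformin | 500 mg BID | started`.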
no code implementations • 11 Dec 2023 • Cheng Peng, Xi Yang, Aokun Chen, Zehao Yu, Kaleb E Smith, Anthony B Costa, Mona G Flores, Jiang Bian, Yonghui Wu
Objective: To solve major clinical natural language processing (NLP) tasks using a unified text-to-text learning architecture based on a generative large language model (LLM) via prompt tuning.
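The listing carries no code for this entry; the sketch below shows the general mechanics of prompt tuning (trainable soft-prompt embeddings prepended to a frozen LM), assuming a HuggingFace-style causal LM that exposes `get_input_embeddings()` and accepts `inputs_embeds`. It illustrates the technique in general, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SoftPromptLM(nn.Module):
    """Prompt tuning sketch: train only a small set of virtual-token
    embeddings prepended to the input; the backbone LM stays frozen."""

    def __init__(self, lm, num_virtual_tokens: int = 20):
        super().__init__()
        self.lm = lm
        for p in self.lm.parameters():  # freeze every backbone weight
            p.requires_grad = False
        emb_dim = lm.get_input_embeddings().embedding_dim
        # The only trainable parameters: one embedding per virtual token.
        self.prompt = nn.Parameter(0.02 * torch.randn(num_virtual_tokens, emb_dim))

    def forward(self, input_ids, attention_mask):
        tok_emb = self.lm.get_input_embeddings()(input_ids)
        batch = input_ids.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the soft prompt and extend the attention mask to cover it.
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        prompt_mask = attention_mask.new_ones(batch, self.prompt.size(0))
        mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.lm(inputs_embeds=inputs_embeds, attention_mask=mask)
```

Because only the prompt parameters receive gradients, an optimizer over `[model.prompt]` alone suffices, which is what makes prompt tuning far cheaper than full fine-tuning.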
no code implementations • 11 Oct 2023 • Amin Dada, Aokun Chen, Cheng Peng, Kaleb E Smith, Ahmad Idrissi-Yaghir, Constantin Marc Seibold, Jianning Li, Lars Heiliger, Xi Yang, Christoph M. Friedrich, Daniel Truhn, Jan Egger, Jiang Bian, Jens Kleesiek, Yonghui Wu
Traditionally, large language models have been trained on either general web crawls or domain-specific data.
no code implementations • 10 Oct 2023 • Cheng Peng, Xi Yang, Kaleb E Smith, Zehao Yu, Aokun Chen, Jiang Bian, Yonghui Wu
We evaluated the transfer learning ability of prompt-based learning algorithms in a cross-institution setting.
1 code implementation • 22 May 2023 • Cheng Peng, Xi Yang, Aokun Chen, Kaleb E Smith, Nima PourNejatian, Anthony B Costa, Cheryl Martin, Mona G Flores, Ying Zhang, Tanja Magoc, Gloria Lipori, Duane A Mitchell, Naykky S Ospina, Mustafa M Ahmed, William R Hogan, Elizabeth A Shenkman, Yi Guo, Jiang Bian, Yonghui Wu
This study provides insights into the opportunities and challenges of LLMs for medical research and healthcare.
no code implementations • 2 Feb 2022 • Xi Yang, Aokun Chen, Nima PourNejatian, Hoo Chang Shin, Kaleb E Smith, Christopher Parisien, Colin Compas, Cheryl Martin, Mona G Flores, Ying Zhang, Tanja Magoc, Christopher A Harle, Gloria Lipori, Duane A Mitchell, William R Hogan, Elizabeth A Shenkman, Jiang Bian, Yonghui Wu
GatorTron models scale up the clinical language model from 110 million to 8.9 billion parameters and improve performance on 5 clinical NLP tasks (e.g., 9.6% and 9.5% accuracy gains on NLI and MQA); they can be applied to medical AI systems to improve healthcare delivery.
Ranked #10 on Zero-Shot Learning on MedConceptsQA
1 code implementation • 30 Jun 2020 • Kaleb E Smith, Anthony O. Smith
Time-dependent data is a vital source of information across many domains.