2 Feb 2022 • Xi Yang, Aokun Chen, Nima PourNejatian, Hoo Chang Shin, Kaleb E Smith, Christopher Parisien, Colin Compas, Cheryl Martin, Mona G Flores, Ying Zhang, Tanja Magoc, Christopher A Harle, Gloria Lipori, Duane A Mitchell, William R Hogan, Elizabeth A Shenkman, Jiang Bian, Yonghui Wu
GatorTron scales the clinical language model from 110 million to 8.9 billion parameters and improves performance on five clinical NLP tasks (e.g., 9.6% and 9.5% accuracy improvements on NLI and MQA, respectively), and can be applied to medical AI systems to improve healthcare delivery.
Ranked #10 on Zero-Shot Learning on MedConceptsQA