no code implementations • 24 Apr 2024 • Cuong Nhat Ha, Shima Asaadi, Sanjeev Kumar Karn, Oladimeji Farri, Tobias Heimann, Thomas Runkler
Vision-language models, while effective in general domains and showing strong performance in diverse multi-modal applications such as visual question-answering (VQA), struggle to maintain the same level of effectiveness in more specialized domains, e.g., the medical domain.
no code implementations • 28 Nov 2023 • Ali H. Dhanaliwala, Rikhiya Ghosh, Sanjeev Kumar Karn, Poikavila Ullaskrishnan, Oladimeji Farri, Dorin Comaniciu, Charles E. Kahn
The F1 score for extraction was 97% for the RadLing-based system and 78% for the GPT-4 system.
no code implementations • 18 Jun 2023 • Manuela Daniela Danu, George Marica, Sanjeev Kumar Karn, Bogdan Georgescu, Awais Mansoor, Florin Ghesu, Lucian Mihai Itu, Constantin Suciu, Sasa Grbic, Oladimeji Farri, Dorin Comaniciu
Among the sub-sections of a typical radiology report, the Clinical Indications, Findings, and Impression sections often convey important details about a patient's health status.
no code implementations • 5 Jun 2023 • Sanjeev Kumar Karn, Rikhiya Ghosh, Kusuma P, Oladimeji Farri
Instruction-tuned generative large language models (LLMs) such as ChatGPT and Bloomz possess excellent generalization abilities, but they face limitations in understanding radiology reports, particularly in generating the IMPRESSIONS section from the FINDINGS section.
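As a rough illustration of this task only (not the paper's method), the sketch below prompts an instruction-tuned LLM to draft an IMPRESSIONS section from a FINDINGS section; the checkpoint, prompt wording, and decoding settings are illustrative assumptions.

```python
# Illustrative sketch: prompt an instruction-tuned LLM to turn FINDINGS into IMPRESSIONS.
# The checkpoint, prompt template, and decoding settings are assumptions, not the paper's setup.
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloomz-560m")

findings = (
    "Lungs are clear bilaterally. No pleural effusion or pneumothorax. "
    "Heart size is within normal limits."
)
prompt = (
    "Summarize the radiology FINDINGS below as a concise IMPRESSIONS section.\n"
    f"FINDINGS: {findings}\n"
    "IMPRESSIONS:"
)

result = generator(prompt, max_new_tokens=60, do_sample=False, return_full_text=False)
print(result[0]["generated_text"])
```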
no code implementations • 4 Jun 2023 • Rikhiya Ghosh, Sanjeev Kumar Karn, Manuela Daniela Danu, Larisa Micu, Ramya Vunikili, Oladimeji Farri
Most natural language tasks in the radiology domain use language models pre-trained on biomedical corpora.
no code implementations • ACL 2022 • Sanjeev Kumar Karn, Ning Liu, Hinrich Schuetze, Oladimeji Farri
A cascade of tasks is required to automatically generate an abstractive summary of a typical, information-rich radiology report.
no code implementations • EACL (AdaptNLP) 2021 • Sanjeev Kumar Karn, Francine Chen, Yan-Ying Chen, Ulli Waltinger, Hinrich Schuetze
Interleaved texts, where posts belonging to different threads occur in a single sequence, are common in online chat, making it time-consuming to quickly obtain an overview of the discussions.
no code implementations • 25 Sep 2019 • Sanjeev Kumar Karn, Francine Chen, Yan-Ying Chen, Ulli Waltinger, Hinrich Schütze
The interleaved posts are encoded hierarchically, i.e., word-to-word (words in a post) followed by post-to-post (posts in a channel).
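A minimal sketch of this hierarchical encoding idea, assuming a GRU-based encoder with illustrative dimensions; the class and variable names are hypothetical, not the paper's implementation:

```python
# Hierarchical encoding sketch: word-to-word within each post, then post-to-post across a channel.
# GRU choice, dimensions, and names are illustrative assumptions.
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # word-to-word: encodes the words within a single post
        self.word_encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        # post-to-post: contextualizes post vectors across the channel
        self.post_encoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, channel_tokens):
        # channel_tokens: (num_posts, max_words) token ids for one channel
        embedded = self.embedding(channel_tokens)       # (posts, words, emb_dim)
        _, post_vecs = self.word_encoder(embedded)      # (1, posts, hidden_dim)
        post_ctx, _ = self.post_encoder(post_vecs)      # (1, posts, hidden_dim)
        return post_ctx.squeeze(0)                      # (posts, hidden_dim)

# Toy usage: a channel of 3 posts, each padded to 5 tokens
encoder = HierarchicalEncoder(vocab_size=1000)
channel = torch.randint(0, 1000, (3, 5))
print(encoder(channel).shape)  # torch.Size([3, 256])
```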
no code implementations • 5 Jun 2019 • Sanjeev Kumar Karn, Francine Chen, Yan-Ying Chen, Ulli Waltinger, Hinrich Schütze
Interleaved texts, where posts belonging to different threads occur in one sequence, are common, e.g., in online chat conversations.
2 code implementations • NAACL 2019 • Sanjeev Kumar Karn, Mark Buckley, Ulli Waltinger, Hinrich Schütze
In this work, we define the task of teaser generation and provide an evaluation benchmark and baseline systems for it.