1 code implementation • NAACL (ClinicalNLP) 2022 • Henning Schäfer, Ahmad Idrissi-Yaghir, Peter Horn, Christoph Friedrich
In this work, cross-lingual span prediction based on contextualized word embedding models is combined with neural machine translation (NMT) to transfer state-of-the-art natural language processing (NLP) models to a clinical corpus in a low-resource language.
no code implementations • 20 May 2024 • Tabea M. G. Pakull, Hendrik Damm, Ahmad Idrissi-Yaghir, Henning Schäfer, Peter A. Horn, Christoph M. Friedrich
Out of 54 participants, the WisPerMed team reached 4th place, as measured by readability, factuality, and relevance.
no code implementations • 18 May 2024 • Hendrik Damm, Tabea M. G. Pakull, Bahadır Eryılmaz, Helmut Becker, Ahmad Idrissi-Yaghir, Henning Schäfer, Sergej Schultenkämper, Christoph M. Friedrich
Various strategies were employed, including few-shot learning, instruction tuning, and Dynamic Expert Selection (DES), to develop models capable of generating the required text sections.
1 code implementation • 16 May 2024 • Johannes Rückert, Louise Bloch, Raphael Brüngel, Ahmad Idrissi-Yaghir, Henning Schäfer, Cynthia S. Schmidt, Sven Koitka, Obioma Pelka, Asma Ben Abacha, Alba G. Seco de Herrera, Henning Müller, Peter A. Horn, Felix Nensa, Christoph M. Friedrich
The dataset is suitable for training image annotation models based on image-caption pairs, or for multi-label image classification using Unified Medical Language System (UMLS) concepts provided with each image.
no code implementations • 8 Apr 2024 • Ahmad Idrissi-Yaghir, Amin Dada, Henning Schäfer, Kamyar Arzideh, Giulia Baldini, Jan Trienes, Max Hasin, Jeanette Bewersdorff, Cynthia S. Schmidt, Marie Bauer, Kaleb E. Smith, Jiang Bian, Yonghui Wu, Jörg Schlötterer, Torsten Zesch, Peter A. Horn, Christin Seifert, Felix Nensa, Jens Kleesiek, Christoph M. Friedrich
Recent advances in natural language processing (NLP) can be largely attributed to the advent of pre-trained language models such as BERT and RoBERTa.
no code implementations • 12 Dec 2022 • Ahmad Idrissi-Yaghir, Henning Schäfer, Nadja Bauer, Christoph M. Friedrich
For the Relevance Classification subtask, the best models achieve a micro-averaged $F1$-score of 96.1% on the first test set and 95.9% on the second, and scores of 85.1% and 85.3% for the Polarity Classification subtask.
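A micro-averaged $F1$-score pools true positives, false positives, and false negatives across all classes before computing precision and recall, so frequent classes weigh more heavily than in macro averaging. A minimal sketch of the metric (illustrative only, not the authors' evaluation code; the label sets are hypothetical):

```python
def micro_f1(y_true, y_pred):
    """Micro-averaged F1 over per-sample label sets.

    Counts are pooled globally across all samples and classes,
    then a single precision/recall/F1 is computed.
    """
    tp = fp = fn = 0
    for true, pred in zip(y_true, y_pred):
        true, pred = set(true), set(pred)
        tp += len(true & pred)   # labels predicted and correct
        fp += len(pred - true)   # labels predicted but wrong
        fn += len(true - pred)   # labels missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Hypothetical polarity labels for three posts
gold = [{"positive"}, {"negative"}, {"positive"}]
pred = [{"positive"}, {"positive"}, {"positive"}]
print(round(micro_f1(gold, pred), 3))  # pooled tp=2, fp=1, fn=1
```

In practice, libraries such as scikit-learn provide the same computation via `f1_score(..., average="micro")`.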