In order to provide high-quality care, health professionals must efficiently identify the presence, possibility, or absence of symptoms, treatments and other relevant entities in free-text clinical notes.
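The task framing — deciding whether an entity is present, possible, or absent — can be sketched with a toy rule-based classifier. All cue lists and function names below are hypothetical illustrations, not the authors' method; real systems train models for this.

```python
# Hypothetical rule-based sketch: classify whether a clinical entity is
# asserted as present, possible, or absent in a sentence, using simple
# negation and uncertainty cue words.

NEGATION_CUES = {"no", "denies", "without", "absent"}
UNCERTAINTY_CUES = {"possible", "possibly", "suspected", "may", "likely"}

def assertion_status(sentence: str, entity: str) -> str:
    tokens = sentence.lower().split()
    if entity.lower() not in sentence.lower():
        return "not mentioned"
    if any(cue in tokens for cue in NEGATION_CUES):
        return "absent"
    if any(cue in tokens for cue in UNCERTAINTY_CUES):
        return "possible"
    return "present"

print(assertion_status("Patient denies chest pain.", "chest pain"))  # absent
```

A trained model replaces the cue lists with learned context, but the output space (present / possible / absent) is the same.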
no code implementations • 29 Jun 2023 • Ji-Ung Lee, Haritz Puerto, Betty van Aken, Yuki Arase, Jessica Zosa Forde, Leon Derczynski, Andreas Rücklé, Iryna Gurevych, Roy Schwartz, Emma Strubell, Jesse Dodge
Many recent improvements in NLP stem from the development and use of large pre-trained language models (PLMs) with billions of parameters.
The use of deep neural models for diagnosis prediction from clinical text has shown promising results.
no code implementations • 31 Aug 2022 • Marcos Treviso, Ji-Ung Lee, Tianchu Ji, Betty van Aken, Qingqing Cao, Manuel R. Ciosici, Michael Hassid, Kenneth Heafield, Sara Hooker, Colin Raffel, Pedro H. Martins, André F. T. Martins, Jessica Zosa Forde, Peter Milder, Edwin Simpson, Noam Slonim, Jesse Dodge, Emma Strubell, Niranjan Balasubramanian, Leon Derczynski, Iryna Gurevych, Roy Schwartz
Recent work in natural language processing (NLP) has yielded appealing results from scaling model parameters and training data; however, using only scale to improve performance means that resource consumption also grows.
Clinical phenotyping enables the automatic extraction of clinical conditions from patient records, which can be beneficial to doctors and clinics worldwide.
We thus introduce an extensible testing framework that evaluates the behavior of clinical outcome models under changes to the input.
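The idea of behavioral testing can be sketched as an invariance check: perturb an input note in a way that should not affect the outcome, and measure how often the model's prediction stays the same. The perturbation, model interface, and names below are hypothetical placeholders, not the framework's actual code.

```python
# Minimal sketch of a behavioral invariance test for a clinical outcome
# model. The gender-swap perturbation assumes gender is clinically
# irrelevant to the outcome being predicted.

def swap_gender_terms(note: str) -> str:
    """Perturbation: swap gendered tokens in the note."""
    mapping = {"male": "female", "female": "male", "he": "she", "she": "he"}
    return " ".join(mapping.get(tok, tok) for tok in note.split())

def run_invariance_test(model, notes):
    """Return the fraction of notes whose prediction is unchanged
    under the perturbation."""
    unchanged = sum(
        model(note) == model(swap_gender_terms(note)) for note in notes
    )
    return unchanged / len(notes)

# Toy stand-in model: flags high risk whenever "sepsis" is mentioned.
toy_model = lambda note: "sepsis" in note.lower()

notes = ["male patient with sepsis", "female patient, stable"]
print(run_invariance_test(toy_model, notes))  # 1.0
```

A score below 1.0 on such a test would flag sensitivity to an input change that should not matter.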
Outcome prediction from clinical text can prevent doctors from overlooking possible risks and help hospitals plan their capacity.
Ranked #1 on Medical Diagnosis on Clinical Admission Notes from MIMIC-III (using extra training data)
At the same time, they are difficult to incorporate into the large, black-box models that achieve state-of-the-art results in a multitude of NLP tasks.
Our model leverages a dual encoder architecture with hierarchical LSTM layers and multi-task training to encode the position of clinical entities and aspects alongside the document discourse.
In order to better understand BERT and other Transformer-based models, we present a layer-wise analysis of BERT's hidden states.
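One way to make a layer-wise analysis concrete is to track how a token's hidden representation drifts across layers, for example via cosine similarity to the final layer. The sketch below uses synthetic random vectors in place of BERT's real hidden states (e.g., the 13 layer outputs of bert-base, hidden size 768); only the measurement itself is illustrated, not the paper's specific analysis.

```python
import numpy as np

# Synthetic stand-ins for per-layer hidden states of a single token:
# shape (num_layers, hidden_size), mirroring bert-base dimensions.
rng = np.random.default_rng(0)
num_layers, hidden_size = 13, 768
hidden_states = rng.normal(size=(num_layers, hidden_size))

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# How similar is each layer's representation to the final layer's?
similarity_to_final = [
    cosine(hidden_states[layer], hidden_states[-1])
    for layer in range(num_layers)
]
# The last entry compares the final layer with itself and is ~1.0.
```

With real BERT states (obtained via `output_hidden_states=True` in Hugging Face Transformers), curves like this reveal where in the network task-relevant information emerges.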
Toxic comment classification has become an active research field with many recently proposed approaches.