We extend our BERT-based approach to the entity linking task.
Social media posts contain potentially valuable information about medical conditions and health-related behavior.
In Track-1 of the BioCreative VII Challenge, participants are asked to identify interactions between drugs/chemicals and proteins.
There has been an influx of biomedical domain-specific language models, showing that models pre-trained on biomedical text perform better on biomedical benchmarks than those trained on general-domain text corpora such as Wikipedia and Books.
The model ranks #1 on the BC5CDR-disease named entity recognition benchmark.
Synthesizing medical images, such as PET scans, is challenging because their intensity range is much wider and denser than that of photographs and digital renderings, and is often heavily biased toward zero.
Hoo-chang Shin, Alvin Ihsani, Ziyue Xu, Swetha Mandava, Sharath Turuvekere Sreenivas, Christopher Forster, Jiook Cha, Alzheimer's Disease Neuroimaging Initiative • 10 Aug 2020
This paper proposes an alternative approach to the aforementioned, where AD diagnosis is incorporated in the GAN training objective to achieve the best AD classification performance.
In this work, we investigate the potential of an end-to-end method fusing gene code with image features to generate synthetic pathology images and learn a radiogenomic map simultaneously.
A radiogenomic map linking image features and gene expression profiles is useful for noninvasively identifying the molecular properties of a particular type of disease.
Medical imaging data sets are often imbalanced as pathologic findings are generally rare, which introduces significant challenges when training deep learning models.
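One common way to address such imbalance (a generic sketch, not necessarily the method used in any of the papers above) is to oversample the rare pathologic class until the classes are balanced:

```python
from collections import Counter
import random

def oversample(samples, labels, seed=0):
    """Duplicate minority-class samples until all classes are balanced.

    Illustrative sketch of oversampling for imbalanced medical data;
    the specific balancing strategy is an assumption, not from the paper.
    """
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())  # size of the largest class
    by_class = {c: [s for s, l in zip(samples, labels) if l == c] for c in counts}
    out_samples, out_labels = [], []
    for c, items in by_class.items():
        # keep all originals, then draw random duplicates up to the target size
        picks = items + [rng.choice(items) for _ in range(target - len(items))]
        out_samples.extend(picks)
        out_labels.extend([c] * target)
    return out_samples, out_labels
```

In practice, class-weighted losses or augmentation-based oversampling serve the same purpose with less literal duplication.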
The recent rapid and tremendous success of deep convolutional neural networks (CNN) on many challenging computer vision tasks largely derives from the accessibility of the well-annotated ImageNet and PASCAL VOC datasets.
Recurrent neural networks (RNNs) are then trained to describe the contexts of a detected disease, based on the deep CNN features.
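The CNN-to-RNN handoff can be sketched as conditioning an RNN's initial hidden state on the image feature vector (a toy numpy sketch; dimensions, weights, and the start-token convention are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def rnn_describe(cnn_feature, embeddings, steps, W_h, W_x, W_out):
    """Unroll a vanilla RNN whose initial hidden state is the CNN feature.

    cnn_feature: image feature vector from a CNN (hidden-state sized)
    embeddings:  token embedding matrix; row 0 plays the start token
    W_h, W_x, W_out: recurrent, input, and output projection matrices
    """
    h = np.tanh(cnn_feature)      # condition the state on image features
    tokens = []
    x = embeddings[0]             # start-token embedding (assumed convention)
    for _ in range(steps):
        h = np.tanh(W_h @ h + W_x @ x)
        logits = W_out @ h
        tok = int(np.argmax(logits))   # greedy decoding
        tokens.append(tok)
        x = embeddings[tok]
    return tokens
```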
Obtaining semantic labels on a large-scale radiology image database (215,786 key images from 61,845 unique patients) is a prerequisite, yet a bottleneck, for training highly effective deep convolutional neural network (CNN) models for image recognition.
Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on natural image datasets for medical image tasks.
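The fine-tuning recipe can be illustrated with a toy sketch: a "pretrained" backbone is kept frozen while only a new task-specific head is trained. The backbone here is a stand-in fixed linear map, not a real CNN, and all weights and data are illustrative:

```python
def backbone(x):
    """Frozen 'pretrained' feature extractor (fixed, never updated)."""
    w = [0.5, -0.25]  # stand-in for pretrained weights
    return [w[0] * x, w[1] * x]

def train_head(data, epochs=200, lr=0.1):
    """Fit a linear head on frozen backbone features via SGD.

    Only the head weights w are updated; the backbone stays frozen,
    mirroring the usual transfer-learning setup.
    """
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            f = backbone(x)
            pred = w[0] * f[0] + w[1] * f[1]
            err = pred - y
            # gradient step on the head only
            w = [w[0] - lr * err * f[0], w[1] - lr * err * f[1]]
    return w
```

In a real pipeline the same idea appears as freezing early CNN layers (or using a small learning rate for them) and training a new classification layer on the medical task.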
We propose and evaluate several variations of deep ConvNets in the context of hierarchical, coarse-to-fine classification on image patches and regions, i.e., superpixels.
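The coarse-to-fine idea can be sketched as a two-stage cascade: a coarse classifier first routes an input to a group, then a group-specific fine classifier assigns the final label. The groups, labels, and thresholds below are illustrative placeholders, not the paper's classes:

```python
def classify_coarse(score):
    """Stage 1: route to a coarse group (threshold is illustrative)."""
    return "abnormal" if score > 0.5 else "normal"

def classify_fine(group, score):
    """Stage 2: a per-group fine classifier refines the decision."""
    fine = {
        "normal": lambda s: "clear",
        "abnormal": lambda s: "lesion" if s > 0.8 else "opacity",
    }
    return fine[group](score)

def coarse_to_fine(score):
    """Cascade the two stages, returning (coarse group, fine label)."""
    group = classify_coarse(score)
    return group, classify_fine(group, score)
```

The benefit of the cascade is that each fine classifier only has to separate labels within its own group, a much easier problem than flat classification over all labels.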
We present an interleaved text/image deep learning system to extract and mine the semantic interactions of radiology images and reports from a national research hospital's picture archiving and communication system.
We show that a data augmentation approach can help to enrich the data set and improve classification performance.
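A minimal form of such enrichment (a generic sketch of geometric augmentation; the paper's exact pipeline is not specified here) generates label-preserving variants of each image:

```python
import numpy as np

def augment(image):
    """Generate simple geometric augmentations of a 2-D image array.

    Returns the original plus flips and 90-degree rotations, all of
    which preserve the image's label for most radiology tasks.
    """
    variants = [image, np.fliplr(image), np.flipud(image)]
    variants += [np.rot90(image, k) for k in (1, 2, 3)]
    return variants
```

Each training image thus contributes six samples, which both enlarges the dataset and discourages the model from memorizing orientation-specific cues.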