Lesion segmentation in medical imaging has been an important topic in clinical research.
The classification and localization branches are trained jointly; once trained, the model can both localize and characterize pneumonia from the input image alone.
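Joint training of this kind is commonly implemented as a shared backbone with two task heads. The sketch below is a minimal assumed architecture, not the paper's network; all layer sizes, the box parameterization, and the class names are illustrative.

```python
# Hedged sketch: one backbone feeding a classification head (disease
# characterization) and a localization head (bounding-box regression).
import torch
import torch.nn as nn

class JointNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8))
        self.cls_head = nn.Linear(16 * 8 * 8, num_classes)  # characterization
        self.loc_head = nn.Linear(16 * 8 * 8, 4)            # box (x, y, w, h)

    def forward(self, x):
        z = self.backbone(x).flatten(1)
        return self.cls_head(z), self.loc_head(z)

cls_logits, box = JointNet()(torch.randn(1, 1, 64, 64))
```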
Building robust deep learning-based models requires diverse training data, ideally from several sources.
In this work, we introduce an image-text pre-training framework that can learn from such raw data with mixed inputs, i.e., paired image-text data or a mixture of paired and unpaired data.
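One plausible way to handle such a mixed batch, assuming a CLIP-style shared embedding space, is to apply a contrastive loss only to the paired subset; the sketch below is an assumption for illustration, not the paper's objective.

```python
# Minimal sketch: symmetric InfoNCE on the paired subset of a mixed batch.
import torch
import torch.nn.functional as F

def mixed_batch_loss(img_emb, txt_emb, is_paired, temperature=0.07):
    """img_emb, txt_emb: (B, D) embeddings; is_paired: (B,) bool mask marking
    rows that have a caption. Unpaired text rows are ignored."""
    assert is_paired.any(), "need at least one paired example per batch"
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    p_img, p_txt = img_emb[is_paired], txt_emb[is_paired]
    logits = p_img @ p_txt.t() / temperature
    targets = torch.arange(p_img.size(0), device=logits.device)
    # Unpaired images would receive a unimodal self-supervised loss here
    # (e.g. masked image modeling); omitted from this sketch.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

mask = torch.tensor([True, True, True, False, False])
loss = mixed_batch_loss(torch.randn(5, 128), torch.randn(5, 128), mask)
```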
To stimulate COVID-19 research, we release a dataset of influenza clinical trials and antiviral analogies, used in conjunction with the COVID-19 Open Research Dataset Challenge (CORD-19) literature dataset in this study.
23 Nov 2020 • Dong Yang, Ziyue Xu, Wenqi Li, Andriy Myronenko, Holger R. Roth, Stephanie Harmon, Sheng Xu, Baris Turkbey, Evrim Turkbey, Xiaosong Wang, Wentao Zhu, Gianpaolo Carrafiello, Francesca Patella, Maurizio Cariati, Hirofumi Obinata, Hitoshi Mori, Kaku Tamura, Peng An, Bradford J. Wood, Daguang Xu
To facilitate CT analysis, recent efforts have focused on computer-aided characterization and diagnosis and have shown promising results.
Here, we suggest using minimal user interaction in the form of extreme point clicks to train a segmentation model which, in effect, can be used to speed up medical image annotation.
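A typical way to exploit such clicks, shown in the assumed sketch below, is to derive a crop box from the extreme points and render them as a Gaussian heatmap that is stacked with the image as an extra input channel; the margin, sigma, and function names are illustrative choices, not the authors' exact pipeline.

```python
# Hedged sketch: extreme-point clicks -> crop box + Gaussian point heatmap.
import numpy as np

def points_to_bbox(points, margin=5, shape=None):
    """points: (N, 2) array of (row, col) extreme-point clicks."""
    lo = points.min(0) - margin
    hi = points.max(0) + margin
    if shape is not None:
        lo = np.maximum(lo, 0)
        hi = np.minimum(hi, np.asarray(shape) - 1)
    return lo, hi

def points_to_heatmap(points, shape, sigma=5.0):
    """Render each click as a Gaussian blob; the network sees image + heatmap."""
    rr, cc = np.mgrid[:shape[0], :shape[1]]
    heat = np.zeros(shape, dtype=np.float32)
    for r, c in points:
        heat = np.maximum(
            heat, np.exp(-((rr - r) ** 2 + (cc - c) ** 2) / (2 * sigma ** 2)))
    return heat

clicks = np.array([[40, 10], [60, 90], [20, 50], [85, 55]])  # 4 extreme points
lo, hi = points_to_bbox(clicks, shape=(128, 128))
heat = points_to_heatmap(clicks, (128, 128))
```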
We apply the attention-on-label scheme to the classification task on a synthetically noised CIFAR-10 dataset as a proof of concept, and then demonstrate superior results (a 3-5% average increase in multiple disease-classification AUCs) on chest X-ray images from a hospital-scale dataset (MIMIC-CXR) and a hand-labeled dataset (OpenI), in comparison to regular training paradigms.
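As a rough illustration of the idea (the actual scheme is not spelled out here, so this is an assumption), one can learn per-sample, per-label trust weights that reweight a multi-label loss so that likely-noisy labels contribute less:

```python
# Hedged sketch: learned label-attention weights reweighting a BCE loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelAttentionLoss(nn.Module):
    """The log-barrier term keeps trust weights from collapsing to zero."""
    def __init__(self, feat_dim, num_labels):
        super().__init__()
        self.att = nn.Linear(feat_dim, num_labels)  # label-attention logits

    def forward(self, features, logits, targets):
        w = torch.sigmoid(self.att(features))       # (B, C) trust weights
        per_label = F.binary_cross_entropy_with_logits(
            logits, targets.float(), reduction="none")
        return (w * per_label).mean() - 0.01 * torch.log(w + 1e-8).mean()

crit = LabelAttentionLoss(feat_dim=512, num_labels=14)
loss = crit(torch.randn(8, 512), torch.randn(8, 14),
            torch.randint(0, 2, (8, 14)))
```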
The architectural modifications address three obstacles: implementing a supervised vision-and-language detection method in a weakly supervised fashion, incorporating clinical referring-expression natural-language information, and generating high-fidelity detections with map probabilities.
10 Jul 2020 • Liyue Shen, Wentao Zhu, Xiaosong Wang, Lei Xing, John M. Pauly, Baris Turkbey, Stephanie Anne Harmon, Thomas Hogue Sanford, Sherif Mehralivand, Peter Choyke, Bradford Wood, Daguang Xu
Multi-domain data are widely leveraged in vision applications, taking advantage of complementary information from different modalities, e.g., brain tumor segmentation from multi-parametric magnetic resonance imaging (MRI).
Object segmentation plays an important role in modern medical image analysis, benefiting clinical study, disease diagnosis, and surgery planning.
In addition, we propose a new evaluation metric for radiology image reporting with the assistance of the same composed graph.
In this work, we investigate the potential of an end-to-end method that fuses gene code with image features to generate synthetic pathology images and learn the radiogenomic map simultaneously.
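Conditioning an image generator on a gene-expression code is often done conditional-GAN style; the sketch below is a loose assumed illustration of that pattern, with all shapes, names, and the generator design being placeholders rather than the paper's model.

```python
# Hedged sketch: a generator conditioned on a gene-expression code.
import torch
import torch.nn as nn

class GeneConditionedGenerator(nn.Module):
    def __init__(self, gene_dim=100, noise_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(gene_dim + noise_dim, 256), nn.ReLU(),
            nn.Linear(256, 32 * 32), nn.Tanh())

    def forward(self, gene_code, noise):
        x = torch.cat([gene_code, noise], dim=1)
        return self.net(x).view(-1, 1, 32, 32)   # synthetic pathology patch

g = GeneConditionedGenerator()
img = g(torch.randn(4, 100), torch.randn(4, 64))
```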
A radiogenomic map linking image features and gene expression profiles is useful for noninvasively identifying molecular properties of a particular type of disease.
We rethink data augmentation for 3D medical images and propose a deep stacked transformations (DST) approach for domain generalization.
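In the spirit of stacking randomized transformations, a minimal sketch looks like the chain below; the specific transform set, parameters, and function names are assumptions, not the paper's DST configuration.

```python
# Hedged sketch: chain several randomized 3D augmentations.
import numpy as np

def random_flip(vol, rng):
    return np.flip(vol, axis=rng.integers(0, vol.ndim)).copy()

def random_gamma(vol, rng, lo=0.7, hi=1.5):
    v = (vol - vol.min()) / (np.ptp(vol) + 1e-8)   # normalize to [0, 1]
    return v ** rng.uniform(lo, hi)

def random_noise(vol, rng, sigma=0.03):
    return vol + rng.normal(0.0, sigma, vol.shape)

def stacked_transform(vol, rng=None):
    rng = rng or np.random.default_rng()
    for t in (random_flip, random_gamma, random_noise):  # the "stack"
        vol = t(vol, rng)
    return vol

aug = stacked_transform(np.random.rand(32, 64, 64))
```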
Results validate that the ST-ConvLSTM produces a Dice score of 83.2% ± 5.1% and an RVD of 11.2% ± 10.8%, both significantly outperforming (p < 0.05) the compared methods (a linear model, ConvLSTM, and a generative adversarial network (GAN)) on the metric of predicting future tumor volumes.
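For reference, the two reported metrics have standard definitions, sketched here on binary masks (the epsilon guard is an implementation choice):

```python
# Standard Dice and relative-volume-difference (RVD) definitions.
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice = 2|P ∩ G| / (|P| + |G|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum() + eps)

def rvd(pred, gt, eps=1e-8):
    """RVD = (|P| - |G|) / |G|."""
    return (pred.sum() - gt.sum()) / (gt.sum() + eps)

p = np.zeros((10, 10), bool); p[2:6, 2:6] = True
g = np.zeros((10, 10), bool); g[3:7, 3:7] = True
print(dice(p, g), rvd(p, g))   # overlap 9 px: dice = 18/32 = 0.5625, rvd = 0
```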
In addition, highly confident samples (measured by classification probabilities) and their corresponding class-conditional heatmaps (generated by the CNN) are extracted and further fed into the AGCL framework to guide the learning of more distinctive convolutional features in the next iteration.
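The confidence-gating step described above reduces to thresholding the per-sample maximum class probability; a minimal sketch follows (the 0.9 threshold is an assumed value, and the heatmap extraction is omitted):

```python
# Hedged sketch: keep samples whose predicted-class probability clears a
# threshold; these feed the next training iteration with their heatmaps.
import numpy as np

def select_confident(probs, threshold=0.9):
    """probs: (N, C) softmax outputs; returns kept indices and their classes."""
    conf = probs.max(axis=1)
    keep = np.where(conf >= threshold)[0]
    return keep, probs[keep].argmax(axis=1)

probs = np.array([[0.95, 0.05], [0.55, 0.45], [0.08, 0.92]])
idx, classes = select_confident(probs)   # -> idx=[0, 2], classes=[0, 1]
```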
Chest X-rays are one of the most common radiological examinations in daily clinical routines.
Negative and uncertain medical findings are frequent in radiology reports, but discriminating them from positive findings remains challenging for information extraction.
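To make the difficulty concrete, here is a toy NegEx-style rule-based check (the cue lists and window size are assumptions; real extraction systems are far more robust than this):

```python
# Toy illustration: classify a finding mention by scanning a backward window
# for negation or uncertainty cues.
NEGATION_CUES = ("no ", "without ", "negative for ", "free of ")
UNCERTAIN_CUES = ("may ", "possible ", "cannot exclude ", "suggestive of ")

def finding_polarity(sentence, finding):
    s = sentence.lower()
    i = s.find(finding.lower())
    if i < 0:
        return "absent"
    window = s[max(0, i - 40):i]          # look backward for a cue
    if any(c in window for c in NEGATION_CUES):
        return "negative"
    if any(c in window for c in UNCERTAIN_CUES):
        return "uncertain"
    return "positive"

print(finding_polarity("No focal consolidation or effusion.", "effusion"))
```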
Then, a triplet network is utilized to learn lesion embeddings with a sequential sampling strategy to depict their hierarchical similarity structure.
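The embedding objective is the standard triplet loss; a minimal sketch is below (the encoder and batch construction are placeholders, and the sequential sampling strategy itself is not reproduced here):

```python
# Sketch: triplet-loss training of a lesion-embedding network.
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))
criterion = nn.TripletMarginLoss(margin=1.0)

anchor, positive, negative = (torch.randn(32, 256) for _ in range(3))
loss = criterion(embed(anchor), embed(positive), embed(negative))
loss.backward()
```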
We categorize the collection of lesions using an unsupervised deep mining scheme to generate clustered pseudo lesion labels.
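The pseudo-labeling step can be illustrated with plain k-means on the mined embeddings, as sketched below; the cluster count and the stand-in features are assumptions, not the paper's mining scheme.

```python
# Hedged sketch: cluster lesion embeddings, use cluster ids as pseudo labels.
import numpy as np
from sklearn.cluster import KMeans

embeddings = np.random.rand(500, 64)          # stand-in for CNN features
pseudo_labels = KMeans(n_clusters=8, n_init=10,
                       random_state=0).fit_predict(embeddings)
```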
The chest X-ray is one of the most commonly accessible radiological examinations for screening and diagnosis of many lung diseases.
The recent rapid and tremendous success of deep convolutional neural networks (CNN) on many challenging computer vision tasks largely derives from the accessibility of the well-annotated ImageNet and PASCAL VOC datasets.
Obtaining semantic labels on a large-scale radiology image database (215,786 key images from 61,845 unique patients) is a prerequisite, yet a bottleneck, for training highly effective deep convolutional neural network (CNN) models for image recognition.