Our paper focuses on automating the generation of medical reports from chest X-ray images, a critical yet time-consuming task for radiologists.
In this work, we address the problem of cross-view geo-localization, which estimates the geospatial location of a street view image by matching it with a database of geo-tagged aerial images.
no code implementations • 9 Nov 2020 • Qingyu Chen, Tiarnan D. L. Keenan, Alexis Allot, Yifan Peng, Elvira Agrón, Amitha Domalpally, Caroline C. W. Klaver, Daniel T. Luttikhuizen, Marcus H. Colyer, Catherine A. Cukras, Henry E. Wiley, M. Teresa Magone, Chantal Cousineau-Krieger, Wai T. Wong, Yingying Zhu, Emily Y. Chew, Zhiyong Lu
The objective was to develop and evaluate the performance of a novel 'M3' deep learning framework for RPD detection.
We introduce an edge prediction module in E$^2$Net and design an edge distance map between liver and tumor boundaries, which is used as an extra supervision signal to train the edge enhanced network.
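An edge distance map of this kind can be built with a Euclidean distance transform over a binary boundary mask. The sketch below is a minimal illustration of the idea, not the paper's exact recipe; the function name and the distance clipping are assumptions.

```python
import numpy as np
from scipy import ndimage

def edge_distance_map(boundary_mask: np.ndarray, clip: float = 20.0) -> np.ndarray:
    """Distance (in pixels) from each pixel to the nearest boundary pixel.

    `boundary_mask` is a binary map of boundary pixels (e.g., liver/tumor
    edges); clipping the distance is an illustrative choice, not the paper's.
    """
    # distance_transform_edt measures distance to the nearest zero,
    # so invert the mask: boundary pixels become zeros.
    dist = ndimage.distance_transform_edt(~boundary_mask.astype(bool))
    return np.clip(dist, 0.0, clip)

# Toy example: a 5x5 image with a single boundary pixel at the center.
mask = np.zeros((5, 5), dtype=np.uint8)
mask[2, 2] = 1
dmap = edge_distance_map(mask)
# dmap is 0 at the boundary pixel and grows with Euclidean distance from it.
```

Such a map could then weight or supervise a per-pixel loss so that errors near boundaries are penalized explicitly.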
We expect the utility of our framework will extend to other problems beyond segmentation due to the improved quality of the generated images and enhanced ability to preserve small structures.
(1) We show that COVID-19-CT-CXR, when used as additional training data, improves DL performance for classifying COVID-19 versus non-COVID-19 CT. (2) We collected CT images of influenza and trained a DL baseline to distinguish COVID-19, influenza, and normal or other diseases on CT. (3) We trained an unsupervised one-class classifier on non-COVID-19 CXR and performed anomaly detection to flag COVID-19 CXR.
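The one-class anomaly-detection setup can be sketched with a standard one-class SVM fit only on "normal" examples. The feature vectors below are random stand-ins (real features would come from the CXR images, e.g., via a pretrained CNN), and the hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Stand-ins for image feature vectors; not real CXR features.
normal_feats = rng.normal(loc=0.0, scale=1.0, size=(200, 16))

# Fit the one-class model on "normal" (non-COVID) features only.
clf = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(normal_feats)

# At test time, -1 flags an anomaly (candidate COVID-19 CXR), +1 an inlier.
outlier = np.full((1, 16), 8.0)   # far outside the training distribution
print(clf.predict(outlier))       # flagged as anomalous (-1)
```

The appeal of this framing is that no COVID-19 images are needed at training time; the model only learns what "normal" looks like.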
In medical imaging applications, preserving small structures is important since these structures can carry information which is highly relevant for disease diagnosis.
The task of cross-view image geo-localization aims to determine the geo-location (GPS coordinates) of a query ground-view image by matching it with the GPS-tagged aerial (satellite) images in a reference dataset.
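At inference time, this retrieval step reduces to a nearest-neighbor search in an embedding space: the query's predicted location is the GPS tag of the most similar reference aerial image. The sketch below assumes embeddings have already been extracted (typically by a two-branch CNN, which is out of scope here); all arrays and names are illustrative stand-ins.

```python
import numpy as np

def localize(query_emb: np.ndarray, ref_embs: np.ndarray,
             ref_gps: np.ndarray) -> np.ndarray:
    """Return the GPS tag of the aerial reference whose L2-normalized
    embedding is most cosine-similar to the ground-view query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    r = ref_embs / np.linalg.norm(ref_embs, axis=1, keepdims=True)
    sims = r @ q                       # cosine similarity to every reference
    return ref_gps[int(np.argmax(sims))]

# Toy database of 3 aerial embeddings with their GPS coordinates.
refs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
gps = np.array([[40.0, -74.0], [34.0, -118.0], [51.5, -0.1]])
query = np.array([0.9, 0.1])
print(localize(query, refs, gps))      # nearest match is the first reference
```

At city scale the exhaustive dot product would be replaced by an approximate nearest-neighbor index, but the matching criterion is the same.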
Visual place recognition is challenging in urban environments and is usually cast as a large-scale image retrieval task.
An additional layer of complexity is that, in real life, the amount and type of data available for each patient can differ significantly.
The task of estimating complex non-rigid 3D motion through a monocular camera is of increasing interest to the wider scientific community.
This is motivated by the observation that activities related in space and time rarely occur independently and can serve as context for one another.