Search Results for author: Jin Long

Found 5 papers, 3 papers with code

Critical Evaluation of Artificial Intelligence as Digital Twin of Pathologist for Prostate Cancer Pathology

no code implementations • 23 Aug 2023 • Okyaz Eminaga, Mahmoud Abbas, Christian Kunder, Yuri Tolkach, Ryan Han, James D. Brooks, Rosalie Nolley, Axel Semjonow, Martin Boegemann, Robert West, Jin Long, Richard Fan, Olaf Bettendorf

Adjusting the decision threshold for the secondary Gleason pattern from 5% to 10% improved the concordance level between pathologists and vPatho for tumor grading on prostatectomy specimens (kappa from 0.44 to 0.64).
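A minimal sketch, not the paper's pipeline, of how such a threshold change can shift agreement: hypothetical secondary-pattern percentages are converted to grades at 5% and 10% cutoffs and compared against made-up pathologist grades with Cohen's kappa.

```python
# Minimal sketch (synthetic data, not the vPatho pipeline): raising the
# reporting threshold for a secondary Gleason pattern changes which cases
# are called "3+4", which in turn changes Cohen's kappa against pathologists.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical pathologist grades for 200 pattern-3-dominant tumors.
pathologist = rng.choice(["3+3", "3+4"], size=200, p=[0.6, 0.4])

# Hypothetical model estimate of the % of secondary (pattern 4) tumor area,
# loosely correlated with the pathologist call.
pct_pattern4 = np.where(pathologist == "3+4",
                        rng.normal(15, 8, 200),
                        rng.normal(6, 4, 200)).clip(0, 100)

def model_grade(pct, threshold):
    """Report the secondary pattern only if it exceeds the threshold (%)."""
    return np.where(pct >= threshold, "3+4", "3+3")

for thr in (5, 10):
    kappa = cohen_kappa_score(pathologist, model_grade(pct_pattern4, thr))
    print(f"threshold {thr:>2}%: kappa = {kappa:.2f}")
```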

CheXstray: Real-time Multi-Modal Data Concordance for Drift Detection in Medical Imaging AI

1 code implementation • 6 Feb 2022 • Arjun Soin, Jameson Merkow, Jin Long, Joseph Paul Cohen, Smitha Saligrama, Stephen Kaiser, Steven Borg, Ivan Tarapov, Matthew P Lungren

We use the CheXpert and PadChest public datasets to build and test a medical imaging AI drift monitoring workflow to track data and model drift without contemporaneous ground truth.
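A minimal sketch of one ingredient of ground-truth-free drift monitoring, not the CheXstray workflow itself: a two-sample Kolmogorov-Smirnov test comparing recent model output scores against a fixed reference window. The data below are synthetic.

```python
# Minimal sketch: detect a shift in the model's score distribution without
# any contemporaneous labels by comparing a recent window to a reference
# window with a two-sample KS test. All values here are simulated.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Reference window: model probabilities collected at validation time.
reference_scores = rng.beta(2, 5, size=5000)

# Current window: a simulated input shift (e.g., a new scanner or site)
# nudges the score distribution upward.
current_scores = rng.beta(2.6, 5, size=500)

stat, p_value = ks_2samp(reference_scores, current_scores)
print(f"KS statistic = {stat:.3f}, p = {p_value:.4f}")
if p_value < 0.01:
    print("Score distribution has drifted from the reference window.")
```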

CheXbreak: Misclassification Identification for Deep Learning Models Interpreting Chest X-rays

no code implementations • 18 Mar 2021 • Emma Chen, Andy Kim, Rayan Krishnan, Jin Long, Andrew Y. Ng, Pranav Rajpurkar

A major obstacle to the integration of deep learning models for chest x-ray interpretation into clinical settings is the lack of understanding of their failure modes.

Learning domain-agnostic visual representation for computational pathology using medically-irrelevant style transfer augmentation

1 code implementation • 2 Feb 2021 • Rikiya Yamashita, Jin Long, Snikitha Banda, Jeanne Shen, Daniel L. Rubin

Although various methods, such as domain adaptation and domain generalization, have been developed to address this challenge, learning robust and generalizable representations remains a core, unsolved problem in medical image understanding; a rough sketch of the style-augmentation idea appears below.

Data Augmentation • Domain Generalization • +1
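A minimal sketch of the underlying idea only, assuming a crude per-channel color-statistic transfer rather than the neural style transfer the paper actually uses; the patch and style images below are synthetic placeholders.

```python
# Minimal sketch (not the paper's method): transfer per-channel color
# statistics from a randomly chosen, medically-irrelevant "style" image onto
# a pathology patch, so a downstream network cannot rely on stain/color cues.
# In practice the patch would come from a whole-slide image and the style
# image from a natural-image collection; here both are random arrays.
import numpy as np

rng = np.random.default_rng(0)

def color_stat_transfer(patch, style, eps=1e-6):
    """Match the patch's per-channel mean/std to those of the style image."""
    p = patch.astype(np.float32)
    s = style.astype(np.float32)
    p_mean, p_std = p.mean(axis=(0, 1)), p.std(axis=(0, 1)) + eps
    s_mean, s_std = s.mean(axis=(0, 1)), s.std(axis=(0, 1))
    out = (p - p_mean) / p_std * s_std + s_mean
    return out.clip(0, 255).astype(np.uint8)

patch = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)  # fake H&E patch
style = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)  # fake style image

augmented = color_stat_transfer(patch, style)
print(augmented.shape, augmented.dtype)
```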
