Search Results for author: Daniel Rubin

Found 29 papers, 11 papers with code

QIAI at MEDIQA 2021: Multimodal Radiology Report Summarization

1 code implementation NAACL (BioNLP) 2021 Jean-Benoit Delbrouck, Cassie Zhang, Daniel Rubin

This paper describes the solution of the QIAI lab sent to the Radiology Report Summarization (RRS) challenge at MEDIQA 2021.

TVAE: Triplet-Based Variational Autoencoder using Metric Learning

2 code implementations 13 Feb 2018 Haque Ishfaq, Assaf Hoogi, Daniel Rubin

Deep metric learning has been demonstrated to be highly effective in learning semantic representations and encoding information that can be used to measure data similarity, by relying on the learned embedding.

Metric Learning Representation Learning
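The TVAE entry above pairs a variational autoencoder with a triplet-based metric-learning objective on the latent space. Below is a minimal sketch of how such a combined loss could look in PyTorch; the encoder/decoder sizes, the loss weights beta and gamma, and the use of a standard triplet margin loss on the latent means are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TVAESketch(nn.Module):
    """Toy VAE whose latent space is additionally shaped by a triplet loss."""

    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, latent_dim)
        self.fc_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def encode(self, x):
        h = self.encoder(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

def tvae_loss(model, anchor, positive, negative, beta=1.0, gamma=1.0):
    """Reconstruction + KL on the anchor, plus a triplet margin loss on latent means.
    beta and gamma are hypothetical weighting factors."""
    recon, mu_a, logvar = model(anchor)
    recon_loss = F.mse_loss(recon, anchor)
    kl = -0.5 * torch.mean(1 + logvar - mu_a.pow(2) - logvar.exp())
    mu_p, _ = model.encode(positive)
    mu_n, _ = model.encode(negative)
    triplet = F.triplet_margin_loss(mu_a, mu_p, mu_n, margin=1.0)
    return recon_loss + beta * kl + gamma * triplet
```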

Rethinking Architecture Design for Tackling Data Heterogeneity in Federated Learning

1 code implementation CVPR 2022 Liangqiong Qu, Yuyin Zhou, Paul Pu Liang, Yingda Xia, Feifei Wang, Ehsan Adeli, Li Fei-Fei, Daniel Rubin

Federated learning is an emerging research paradigm enabling collaborative training of machine learning models among different organizations while keeping data private at each institution.

Federated Learning
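The federated-learning setting described above (collaborative training while each institution's data stays local) is most commonly realized with federated averaging. The sketch below illustrates that general paradigm, not the architecture study of this particular paper; it assumes a model with only floating-point parameters.

```python
import copy
import torch

def local_update(model, loader, epochs=1, lr=1e-3):
    """Train a copy of the shared global model on one client's private data."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(local(x), y).backward()
            opt.step()
    return local.state_dict(), len(loader.dataset)

def federated_round(global_model, client_loaders):
    """One FedAvg round: each client trains locally, then the server averages
    the resulting weights in proportion to each client's dataset size.
    Assumes no integer buffers (e.g., no BatchNorm running stats)."""
    updates = [local_update(global_model, loader) for loader in client_loaders]
    total = sum(n for _, n in updates)
    averaged = {k: sum(sd[k] * (n / total) for sd, n in updates)
                for k in updates[0][0]}
    global_model.load_state_dict(averaged)
    return global_model
```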

Label-Efficient Self-Supervised Federated Learning for Tackling Data Heterogeneity in Medical Imaging

1 code implementation 17 May 2022 Rui Yan, Liangqiong Qu, Qingyue Wei, Shih-Cheng Huang, Liyue Shen, Daniel Rubin, Lei Xing, Yuyin Zhou

The collection and curation of large-scale medical datasets from multiple institutions is essential for training accurate deep learning models, but privacy concerns often hinder data sharing.

Federated Learning Privacy Preserving +2

Deep Active Lesion Segmentation

1 code implementation 19 Aug 2019 Ali Hatamizadeh, Assaf Hoogi, Debleena Sengupta, Wuyue Lu, Brian Wilcox, Daniel Rubin, Demetri Terzopoulos

Lesion segmentation is an important problem in computer-assisted diagnosis that remains challenging due to the prevalence of low contrast and irregular boundaries that are unamenable to shape priors.

Lesion Segmentation Segmentation

Multimodal spatiotemporal graph neural networks for improved prediction of 30-day all-cause hospital readmission

1 code implementation 14 Apr 2022 Siyi Tang, Amara Tariq, Jared Dunnmon, Umesh Sharma, Praneetha Elugunti, Daniel Rubin, Bhavik N. Patel, Imon Banerjee

Measures to predict 30-day readmission are considered an important quality factor for hospitals, as accurate predictions can reduce the overall cost of care by identifying high-risk patients before they are discharged.

Readmission Prediction

ATCON: Attention Consistency for Vision Models

1 code implementation 18 Oct 2022 Ali Mirzazadeh, Florian Dubost, Maxwell Pike, Krish Maniar, Max Zuo, Christopher Lee-Messer, Daniel Rubin

We propose an unsupervised fine-tuning method that optimizes the consistency of attention maps and show that it improves both classification performance and the quality of attention maps.

Event Detection
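The ATCON entry above describes an unsupervised fine-tuning objective that optimizes the consistency of attention maps. The paper's exact formulation is not reproduced here; the sketch below shows one plausible way to define such a consistency term, penalizing disagreement between attention maps computed for two views of the same input.

```python
import torch
import torch.nn.functional as F

def attention_consistency_loss(attn_a, attn_b, eps=1e-8):
    """Penalize disagreement between two attention maps of shape (B, H, W)
    obtained for the same inputs (e.g., under different augmentations).

    Each map is normalized to a spatial probability distribution and compared
    with mean-squared error; other divergences would also work."""
    def normalize(a):
        a = a.flatten(1)
        return a / (a.sum(dim=1, keepdim=True) + eps)
    return F.mse_loss(normalize(attn_a), normalize(attn_b))

# Hypothetical usage with random stand-in attention maps.
attn_view1 = torch.rand(4, 14, 14)
attn_view2 = torch.rand(4, 14, 14)
loss = attention_consistency_loss(attn_view1, attn_view2)
```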

Exploring Image Augmentations for Siamese Representation Learning with Chest X-Rays

1 code implementation 30 Jan 2023 Rogier van der Sluijs, Nandita Bhaskhar, Daniel Rubin, Curtis Langlotz, Akshay Chaudhari

It is thus unknown whether, and to what extent, common augmentation strategies employed in Siamese representation learning generalize to medical images.

Anomaly Detection Representation Learning +1

Towards trustworthy seizure onset detection using workflow notes

1 code implementation 14 Jun 2023 Khaled Saab, Siyi Tang, Mohamed Taha, Christopher Lee-Messer, Christopher Ré, Daniel Rubin

We find that our multilabel model significantly improves overall seizure onset detection performance (+5.9 AUROC points) while greatly improving performance among subgroups (up to +8.3 AUROC points), and decreases false positives on non-epileptiform abnormalities by 8 FPR points.

EEG

A Deep-learning Approach for Prognosis of Age-Related Macular Degeneration Disease using SD-OCT Imaging Biomarkers

no code implementations 27 Feb 2019 Imon Banerjee, Luis de Sisternes, Joelle Hallak, Theodore Leng, Aaron Osborne, Mary Durbin, Daniel Rubin

We propose a hybrid sequential deep learning model to predict the risk of AMD progression in non-exudative AMD eyes at multiple timepoints, starting from short-term progression (3 months) up to long-term progression (21 months).

Deep Learning for Prostate Pathology

no code implementations 11 Oct 2019 Okyaz Eminaga, Yuri Tolkach, Christian Kunder, Mahmood Abbas, Ryan Han, Rosalie Nolley, Axel Semjonow, Martin Boegemann, Sebastian Huss, Andreas Loening, Robert West, Geoffrey Sonn, Richard Fan, Olaf Bettendorf, James Brook, Daniel Rubin

As a use case, these models were applied to annotation tasks in clinician-oriented pathology reports for prostatectomy specimens.

MRI Pulse Sequence Integration for Deep-Learning Based Brain Metastasis Segmentation

no code implementations 18 Dec 2019 Darvin Yi, Endre Grøvik, Michael Iv, Elizabeth Tong, Kyrre Eeg Emblem, Line Brennhaug Nilsen, Cathrine Saxhaug, Anna Latysheva, Kari Dolven Jacobsen, Åslaug Helland, Greg Zaharchuk, Daniel Rubin

We illustrate not only the generalizability of the network but also the utility of this robustness when applying the trained model to data from a different center, which does not use the same pulse sequences.

Small Data Image Classification

Handling Missing MRI Input Data in Deep Learning Segmentation of Brain Metastases: A Multi-Center Study

no code implementations 27 Dec 2019 Endre Grøvik, Darvin Yi, Michael Iv, Elizabeth Tong, Line Brennhaug Nilsen, Anna Latysheva, Cathrine Saxhaug, Kari Dolven Jacobsen, Åslaug Helland, Kyrre Eeg Emblem, Daniel Rubin, Greg Zaharchuk

A deep learning based segmentation model for automatic segmentation of brain metastases, named DropOut, was trained on multi-sequence MRI from 100 patients, and validated/tested on 10/55 patients.

Segmentation
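The "DropOut" model above is trained on multi-sequence MRI so that segmentation still works when some pulse sequences are missing. A common way to obtain that robustness, and a plausible reading of the model's name, is to randomly zero out whole input sequences (channels) during training; the sketch below shows that augmentation in isolation and is not necessarily the paper's exact scheme.

```python
import torch

def drop_random_sequences(x, p_drop=0.5, min_keep=1):
    """Randomly zero out whole MRI sequences (channels) in a (B, C, H, W) batch,
    forcing the network to tolerate missing pulse sequences at inference time."""
    b, c = x.shape[:2]
    keep = (torch.rand(b, c, device=x.device) > p_drop).float()
    # Guarantee that at least `min_keep` sequences survive for every sample.
    for i in range(b):
        if keep[i].sum() < min_keep:
            keep[i, torch.randint(c, (min_keep,), device=x.device)] = 1.0
    return x * keep.view(b, c, 1, 1)
```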

Brain Metastasis Segmentation Network Trained with Robustness to Annotations with Multiple False Negatives

no code implementations MIDL 2019 Darvin Yi, Endre Grøvik, Michael Iv, Elizabeth Tong, Greg Zaharchuk, Daniel Rubin

Even with a simulated false negative rate as high as 50%, applying our loss function to randomly censored data preserves maximum sensitivity at 97% of the baseline with uncensored training data, compared to just 10% for a standard loss function.

Random Bundle: Brain Metastases Segmentation Ensembling through Annotation Randomization

no code implementations 23 Feb 2020 Darvin Yi, Endre Grøvik, Michael Iv, Elizabeth Tong, Greg Zaharchuk, Daniel Rubin

We introduce a novel ensembling method, Random Bundle (RB), that improves performance for brain metastases segmentation.

Segmentation

Semi-Supervised Learning for Sparsely-Labeled Sequential Data: Application to Healthcare Video Processing

1 code implementation 28 Nov 2020 Florian Dubost, Erin Hong, Nandita Bhaskhar, Siyi Tang, Daniel Rubin, Christopher Lee-Messer

We propose a semi-supervised machine learning training strategy to improve event detection performance on sequential data, such as video recordings, when only sparse labels are available, such as event start times without their corresponding end times.

BIG-bench Machine Learning Electroencephalogram (EEG) +2
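The entry above targets sequential data where only event start times are annotated, not end times. One simple way to exploit such sparse labels, shown below purely as an illustrative sketch rather than the paper's training strategy, is to seed positive labels in a fixed window after each annotated start and then expand the labeled set with confident model predictions (self-training); the window length and thresholds are hypothetical.

```python
import torch

def window_pseudo_labels(num_frames, start_indices, window=30):
    """Seed labels: mark a fixed window after each annotated event start as
    positive (1) and leave every other frame unlabeled (-1)."""
    labels = torch.full((num_frames,), -1, dtype=torch.long)
    for s in start_indices:
        labels[s:s + window] = 1
    return labels

def self_train_step(probs, labels, pos_thresh=0.9, neg_thresh=0.1):
    """Expand the labeled set with confident per-frame predictions:
    probs holds the model's positive-class probability for each frame."""
    labels = labels.clone()
    unlabeled = labels == -1
    labels[unlabeled & (probs > pos_thresh)] = 1
    labels[unlabeled & (probs < neg_thresh)] = 0
    return labels
```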

Handling Data Heterogeneity with Generative Replay in Collaborative Learning for Medical Imaging

no code implementations 24 Jun 2021 Liangqiong Qu, Niranjan Balachandar, Miao Zhang, Daniel Rubin

Specifically, instead of directly training a model for task performance, we develop a novel dual model architecture: a primary model learns the desired task, and an auxiliary "generative replay model" allows aggregating knowledge from the heterogeneous clients.

Image Generation Privacy Preserving
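The dual-model architecture above pairs a primary task model with an auxiliary generative replay model that aggregates knowledge across heterogeneous clients. The sketch below illustrates the replay idea in a highly simplified form, assuming a shared generator that produces images in the same shape as local batches and pseudo-labeling of the generated samples with the current primary model; it is not the paper's full training protocol, and `generator.latent_dim` is an assumed attribute.

```python
import torch

def train_client_with_replay(primary, generator, local_loader, n_synth=32, lr=1e-3):
    """Train the primary task model on local data mixed with synthetic 'replayed'
    samples drawn from a generative model that reflects other clients' data."""
    opt = torch.optim.Adam(primary.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for x, y in local_loader:
        # Replay: sample synthetic images, label them with the current model.
        z = torch.randn(n_synth, generator.latent_dim)
        with torch.no_grad():
            x_synth = generator(z)
            y_synth = primary(x_synth).argmax(dim=1)
        x_all = torch.cat([x, x_synth])
        y_all = torch.cat([y, y_synth])
        opt.zero_grad()
        loss_fn(primary(x_all), y_all).backward()
        opt.step()
    return primary
```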

TIME-LAPSE: Learning to say “I don't know” through spatio-temporal uncertainty scoring

no code implementations 29 Sep 2021 Nandita Bhaskhar, Daniel Rubin, Christopher Lee-Messer

We show that TIME-LAPSE is driven more by semantic content than other methods, i.e., it is more robust to dataset statistics.

EEG Electroencephalogram (EEG) +2

RadFusion: Benchmarking Performance and Fairness for Multimodal Pulmonary Embolism Detection from CT and EHR

no code implementations 23 Nov 2021 Yuyin Zhou, Shih-Cheng Huang, Jason Alan Fries, Alaa Youssef, Timothy J. Amrhein, Marcello Chang, Imon Banerjee, Daniel Rubin, Lei Xing, Nigam Shah, Matthew P. Lungren

Despite the routine use of electronic health record (EHR) data by radiologists to contextualize clinical history and inform image interpretation, the majority of deep learning architectures for medical imaging are unimodal, i.e., they only learn features from pixel-level information.

Benchmarking Computed Tomography (CT) +2

Improving Sample Complexity with Observational Supervision

no code implementations ICLR Workshop LLD 2019 Khaled Saab, Jared Dunnmon, Alexander Ratner, Daniel Rubin, Christopher Re

Supervised machine learning models for high-value computer vision applications such as medical image classification often require large datasets labeled by domain experts, which are slow to collect, expensive to maintain, and static with respect to changes in the data distribution.

Image Classification Medical Image Classification

Automated Detection of Patients in Hospital Video Recordings

no code implementations 28 Nov 2021 Siddharth Sharma, Florian Dubost, Christopher Lee-Messer, Daniel Rubin

We evaluate an ImageNet pre-trained Mask R-CNN, a standard deep learning model for object detection, on the task of patient detection using our own curated dataset of 45 videos of hospital patients.

EEG Electroencephalogram (EEG) +2

Masked Co-attentional Transformer reconstructs 100x ultra-fast/low-dose whole-body PET from longitudinal images and anatomically guided MRI

no code implementations 9 May 2022 Yan-Ran Wang, Liangqiong Qu, Natasha Diba Sheybani, Xiaolong Luo, Jiangshan Wang, Kristina Elizabeth Hawk, Ashok Joseph Theruvath, Sergios Gatidis, Xuerong Xiao, Allison Pribnow, Daniel Rubin, Heike E. Daldrup-Link

In this study, we utilize the global similarity between baseline and follow-up PET and magnetic resonance (MR) images to develop Masked-LMCTrans, a longitudinal multi-modality co-attentional CNN-Transformer that provides interaction and joint reasoning between serial PET/MRs of the same patient.

The Importance of Background Information for Out of Distribution Generalization

no code implementations 17 Jun 2022 Jupinder Parmar, Khaled Saab, Brian Pogatchnik, Daniel Rubin, Christopher Ré

Domain generalization in medical image classification is an important problem for trustworthy machine learning to be deployed in healthcare.

Domain Generalization Image Classification +3

Contrastive learning-based pretraining improves representation and transferability of diabetic retinopathy classification models

no code implementations 24 Aug 2022 Minhaj Nur Alam, Rikiya Yamashita, Vignav Ramesh, Tejas Prabhune, Jennifer I. Lim, R. V. P. Chan, Joelle Hallak, Theodore Leng, Daniel Rubin

CL-based pretraining with NST significantly improves DL classification performance, helps the model generalize well (transferring from EyePACS to UIC data), and allows training with small annotated datasets, thereby reducing the ground-truth annotation burden on clinicians.

Contrastive Learning Style Transfer
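The entry above reports that contrastive-learning (CL) pretraining, with neural style transfer (NST) used as an augmentation, improves downstream diabetic-retinopathy classification. For context, here is a compact SimCLR-style NT-Xent loss sketch; the augmentation pipeline, encoder, and hyperparameters used in the paper are not reproduced, and the temperature value is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss over two embedding batches of shape (B, D),
    where z1[i] and z2[i] come from two augmented views of the same image."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2B, D), unit-norm embeddings
    sim = z @ z.t() / temperature                 # pairwise cosine similarities
    n = z.shape[0]
    sim.fill_diagonal_(float('-inf'))             # exclude self-similarity
    # The positive for index i is its other view: i+B (first half) or i-B (second half).
    targets = torch.cat([torch.arange(n // 2) + n // 2, torch.arange(n // 2)])
    return F.cross_entropy(sim, targets)
```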
