Search Results for author: Yash-yee Logan

Found 4 papers, 2 papers with code

FOCAL: A Cost-Aware Video Dataset for Active Learning

1 code implementation • 17 Nov 2023 • Kiran Kokilepersaud, Yash-yee Logan, Ryan Benkert, Chen Zhou, Mohit Prabhushankar, Ghassan AlRegib, Enrique Corona, Kunjan Singh, Mostafa Parchami

In this paper, we introduce the FOCAL (Ford-OLIVES Collaboration on Active Learning) dataset, which enables the study of the impact of annotation cost in a video active learning setting.

Active Learning

OLIVES Dataset: Ophthalmic Labels for Investigating Visual Eye Semantics

1 code implementation • 22 Sep 2022 • Mohit Prabhushankar, Kiran Kokilepersaud, Yash-yee Logan, Stephanie Trejo Corona, Ghassan AlRegib, Charles Wykoff

The dataset consists of 1268 near-IR fundus images, each with at least 49 OCT scans and 16 biomarkers, along with 4 clinical labels and a disease diagnosis of DR or DME.

Time Series Analysis

Patient Aware Active Learning for Fine-Grained OCT Classification

no code implementations • 23 Jun 2022 • Yash-yee Logan, Ryan Benkert, Ahmad Mustafa, Gukyeong Kwon, Ghassan AlRegib

For this purpose, we propose a framework that integrates clinical insights into the sample selection process of active learning and can be combined with existing algorithms.

Active Learning • Classification

DECAL: DEployable Clinical Active Learning

no code implementations • 21 Jun 2022 • Yash-yee Logan, Mohit Prabhushankar, Ghassan AlRegib

Hence, active learning techniques that are developed for natural images are insufficient for handling medical data.

Active Learning
