no code implementations • 5 Nov 2023 • Eric Zimmermann, Justin Szeto, Jerome Pasquero, Frederic Ratle
Benchmark datasets are used to profile and compare algorithms across a variety of tasks, ranging from image classification to segmentation, and also play a central role in image pretraining.
no code implementations • 5 Nov 2023 • Eric Zimmermann, Justin Szeto, Frederic Ratle
Polygons are a common format for quickly annotating objects in instance segmentation tasks.
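As a rough illustration, the sketch below shows a COCO-style polygon annotation (a flat list of x/y vertex coordinates) and one way to rasterize it into a binary instance mask; the field names and the `polygon_to_mask` helper are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from matplotlib.path import Path

# Hypothetical COCO-style instance annotation: the polygon is a flat list
# of [x1, y1, x2, y2, ...] vertex coordinates in image space.
annotation = {
    "image_id": 42,        # illustrative IDs only
    "category_id": 1,
    "segmentation": [[10.0, 10.0, 60.0, 12.0, 55.0, 48.0, 12.0, 45.0]],
}

def polygon_to_mask(flat_coords, height, width):
    """Rasterize one polygon (flat x/y list) into a binary mask."""
    verts = np.asarray(flat_coords, dtype=float).reshape(-1, 2)
    path = Path(verts)
    # Test every pixel centre for membership in the polygon.
    ys, xs = np.mgrid[:height, :width]
    points = np.stack([xs.ravel() + 0.5, ys.ravel() + 0.5], axis=1)
    inside = path.contains_points(points)
    return inside.reshape(height, width)

mask = polygon_to_mask(annotation["segmentation"][0], height=64, width=64)
print(mask.sum(), "foreground pixels")
```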
no code implementations • 4 Jul 2023 • Changjian Shui, Justin Szeto, Raghav Mehta, Douglas L. Arnold, Tal Arbel
However, models that are well calibrated overall can still be poorly calibrated for a sub-population, potentially resulting in a clinician unwittingly making poor decisions for this group based on the recommendations of the model.
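For intuition, the sketch below computes a standard binned expected calibration error (ECE) overall and per sub-population on synthetic predictions, showing how a reasonable overall score can coexist with a poorly calibrated group; the grouping variable and data are purely illustrative, not the paper's experimental setup.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Binned ECE: |accuracy - confidence| per bin, weighted by bin size."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (probs > lo) & (probs <= hi)
        if in_bin.any():
            conf = probs[in_bin].mean()
            acc = labels[in_bin].mean()
            ece += in_bin.mean() * abs(acc - conf)
    return ece

# Illustrative data: predicted probabilities, binary outcomes, and a
# hypothetical sub-population indicator (e.g. a clinical site).
rng = np.random.default_rng(0)
probs = rng.uniform(0.05, 0.95, size=2000)
group = rng.integers(0, 2, size=2000)
# Group 1 outcomes occur less often than predicted -> miscalibrated subgroup.
labels = (rng.uniform(size=2000) < np.where(group == 1, probs * 0.6, probs)).astype(int)

print("overall ECE:", expected_calibration_error(probs, labels))
for g in (0, 1):
    sel = group == g
    print(f"group {g} ECE:", expected_calibration_error(probs[sel], labels[sel]))
```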
no code implementations • 31 Oct 2022 • Brennan Nichyporuk, Jillian Cardinell, Justin Szeto, Raghav Mehta, Jean-Pierre R. Falet, Douglas L. Arnold, Sotirios A. Tsaftaris, Tal Arbel
This is particularly important in the context of medical image segmentation of pathological structures (e.g. lesions), where the annotation process is much more subjective and affected by a number of underlying factors, including the annotation protocol, rater education/experience, and clinical aims, among others.
no code implementations • 2 Aug 2021 • Brennan Nichyporuk, Jillian Cardinell, Justin Szeto, Raghav Mehta, Sotirios Tsaftaris, Douglas L. Arnold, Tal Arbel
Many automatic machine learning models developed for focal pathology (e.g. lesions, tumours) detection and segmentation perform well, but do not generalize as well to new patient cohorts, impeding their widespread adoption into real clinical contexts.
no code implementations • 27 Jul 2021 • Brennan Nichyporuk, Justin Szeto, Douglas L. Arnold, Tal Arbel
There are many clinical contexts that require accurate detection and segmentation of all focal pathologies (e.g. lesions, tumours) in patient images.
no code implementations • 1 Mar 2021 • Xavier Bouthillier, Pierre Delaunay, Mirko Bronzi, Assya Trofimov, Brennan Nichyporuk, Justin Szeto, Naz Sepah, Edward Raff, Kanika Madan, Vikram Voleti, Samira Ebrahimi Kahou, Vincent Michalski, Dmitriy Serdyuk, Tal Arbel, Chris Pal, Gaël Varoquaux, Pascal Vincent
Strong empirical evidence that one machine-learning algorithm A outperforms another one B ideally calls for multiple trials optimizing the learning pipeline over sources of variation such as data sampling, data augmentation, parameter initialization, and hyperparameter choices.
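As a minimal sketch of this protocol, the snippet below compares two algorithms by repeating the whole pipeline under several seeds (each seed re-drawing the random factors) and applying a paired test across trials; `run_pipeline` and the simulated scores are hypothetical stand-ins, not the benchmarking setup from the paper.

```python
import numpy as np
from scipy import stats

def run_pipeline(algorithm, seed):
    """Stand-in for a full training run: a real study would re-draw the data
    split, augmentations, initialization, and hyperparameter search from
    `seed` and return the test metric. Here we only simulate scores."""
    rng = np.random.default_rng([seed, ord(algorithm)])
    base = {"A": 0.81, "B": 0.79}[algorithm]
    return base + rng.normal(scale=0.02)

seeds = range(20)  # multiple trials, each varying all random factors jointly
scores_a = np.array([run_pipeline("A", s) for s in seeds])
scores_b = np.array([run_pipeline("B", s) for s in seeds])

# Paired comparison across trials rather than a single-run difference.
t_stat, p_value = stats.ttest_rel(scores_a, scores_b)
print(f"A: {scores_a.mean():.3f} +/- {scores_a.std(ddof=1):.3f}")
print(f"B: {scores_b.mean():.3f} +/- {scores_b.std(ddof=1):.3f}")
print(f"paired t-test p-value: {p_value:.4f}")
```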