no code implementations • 16 Dec 2024 • Wenhui Cui, Haleh Akrami, Anand A. Joshi, Richard M. Leahy
We overcome this limitation by introducing a novel representation learning strategy integrating meta-learning with self-supervised learning to improve the generalization from normal to clinical features.
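The abstract only sketches the idea, so as a rough illustration (not the authors' implementation) the snippet below shows one common way to combine episodic meta-learning with a self-supervised objective: a Reptile-style meta-update driven by a masked-reconstruction loss. The toy encoder, masking scheme, and all hyperparameters are assumptions.

```python
# Minimal sketch (not the paper's code): Reptile-style meta-learning with a
# self-supervised masked-reconstruction objective. Encoder, masking scheme, and
# hyperparameters are illustrative assumptions.
import copy
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy MLP encoder/decoder pair for 1-D feature vectors."""
    def __init__(self, dim=128, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def self_supervised_loss(model, x, mask_ratio=0.3):
    # Mask a random subset of features and reconstruct only the masked entries.
    mask = (torch.rand_like(x) < mask_ratio).float()
    recon = model(x * (1 - mask))
    return ((recon - x) ** 2 * mask).sum() / mask.sum().clamp(min=1)

def reptile_meta_step(model, tasks, inner_steps=5, inner_lr=1e-3, meta_lr=0.1):
    """One meta-update: adapt a copy of the model to each task with the
    self-supervised loss, then move the shared weights toward the adapted ones."""
    meta_weights = copy.deepcopy(model.state_dict())
    for x_task in tasks:                      # each task = a batch from one domain/site
        model.load_state_dict(meta_weights)
        opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            opt.zero_grad()
            self_supervised_loss(model, x_task).backward()
            opt.step()
        adapted = model.state_dict()
        for k in meta_weights:
            meta_weights[k] = meta_weights[k] + meta_lr * (adapted[k] - meta_weights[k])
    model.load_state_dict(meta_weights)

model = Encoder()
tasks = [torch.randn(32, 128) for _ in range(4)]   # stand-ins for per-site batches
for _ in range(10):
    reptile_meta_step(model, tasks)
```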
no code implementations • 8 Feb 2024 • Kelly Payette, Céline Steger, Roxane Licandro, Priscille de Dumast, Hongwei Bran Li, Matthew Barkovich, Liu Li, Maik Dannecker, Chen Chen, Cheng Ouyang, Niccolò McConnell, Alina Miron, Yongmin Li, Alena Uus, Irina Grigorescu, Paula Ramirez Gilliland, Md Mahfuzur Rahman Siddiquee, Daguang Xu, Andriy Myronenko, Haoyu Wang, Ziyan Huang, Jin Ye, Mireia Alenyà, Valentin Comte, Oscar Camara, Jean-Baptiste Masson, Astrid Nilsson, Charlotte Godard, Moona Mazher, Abdul Qayyum, Yibo Gao, Hangqi Zhou, Shangqi Gao, Jia Fu, Guiming Dong, Guotai Wang, ZunHyan Rieu, HyeonSik Yang, Minwoo Lee, Szymon Płotka, Michal K. Grzeszczyk, Arkadiusz Sitek, Luisa Vargas Daza, Santiago Usma, Pablo Arbelaez, Wenying Lu, WenHao Zhang, Jing Liang, Romain Valabregue, Anand A. Joshi, Krishna N. Nayak, Richard M. Leahy, Luca Wilhelmi, Aline Dändliker, Hui Ji, Antonio G. Gennari, Anton Jakovčić, Melita Klaić, Ana Adžić, Pavel Marković, Gracia Grabarić, Gregor Kasprian, Gregor Dovjak, Milan Rados, Lana Vasung, Meritxell Bach Cuadra, Andras Jakab
The FeTA Challenge 2022 successfully evaluated and advanced the generalizability of multi-class fetal brain tissue segmentation algorithms for MRI, and it continues to benchmark new algorithms.
no code implementations • 21 Dec 2023 • Wenhui Cui, Haleh Akrami, Ganning Zhao, Anand A. Joshi, Richard M. Leahy
To explore the generalizability of the foundation model in downstream applications, we then apply the model to an unseen traumatic brain injury (TBI) dataset for prediction of post-traumatic epilepsy (PTE) using zero-shot learning.
1 code implementation • 7 Nov 2023 • Wenhui Cui, Woojae Jeong, Philipp Thölke, Takfarinas Medani, Karim Jerbi, Anand A. Joshi, Richard M. Leahy
To handle the scarcity and heterogeneity of electroencephalography (EEG) data for Brain-Computer Interface (BCI) tasks, and to harness the power of large publicly available data sets, we propose Neuro-GPT, a foundation model consisting of an EEG encoder and a GPT model.
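The paper's released code contains the real model; purely for orientation, the sketch below shows the general shape described in the abstract: an EEG-chunk encoder whose embeddings feed a GPT-style causal transformer trained to predict the following chunk. Layer sizes, chunking, and the prediction loss here are assumptions, not the Neuro-GPT configuration.

```python
# Illustrative sketch only (see the authors' repository for the actual model):
# an EEG chunk encoder feeding a GPT-style causal transformer that predicts the
# embedding of the next chunk. All architecture details are assumptions.
import torch
import torch.nn as nn

class ChunkEncoder(nn.Module):
    """Encode one EEG chunk (channels x time) into a single embedding."""
    def __init__(self, n_channels=22, d_model=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=7, stride=2), nn.GELU(),
            nn.Conv1d(d_model, d_model, kernel_size=7, stride=2), nn.GELU(),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x):            # x: (batch, chunks, channels, time)
        b, c, ch, t = x.shape
        z = self.conv(x.reshape(b * c, ch, t)).squeeze(-1)
        return z.reshape(b, c, -1)   # (batch, chunks, d_model)

class NeuroGPTSketch(nn.Module):
    def __init__(self, n_channels=22, d_model=256, n_layers=6, n_heads=8):
        super().__init__()
        self.encoder = ChunkEncoder(n_channels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model,
                                           batch_first=True)
        self.gpt = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, d_model)

    def forward(self, x):
        z = self.encoder(x)                           # per-chunk embeddings
        causal = nn.Transformer.generate_square_subsequent_mask(z.size(1))
        h = self.gpt(z, mask=causal)                  # causal self-attention
        return self.head(h), z

# Self-supervised training signal (assumed here): predict the next chunk's embedding.
model = NeuroGPTSketch()
eeg = torch.randn(2, 8, 22, 500)                      # (batch, chunks, channels, samples)
pred, target = model(eeg)
loss = nn.functional.mse_loss(pred[:, :-1], target[:, 1:].detach())
loss.backward()
```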
no code implementations • 16 Dec 2022 • Wenhui Cui, Haleh Akrami, Anand A. Joshi, Richard M. Leahy
Transferring knowledge from a source domain with abundant training data to a target domain is effective for improving representation learning on scarce training data.
no code implementations • 3 Dec 2022 • Hedong Zhang, Anand A. Joshi
In this work, we used a semi-supervised learning method to train a deep learning model that segments brain MRI images.
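As a concrete, hedged illustration of one standard semi-supervised strategy (pseudo-labeling), the sketch below trains a stand-in 3D segmentation network on a mix of labeled and unlabeled volumes. The tiny network, confidence threshold, and loss weighting are assumptions and not the paper's pipeline.

```python
# Minimal pseudo-labeling sketch for semi-supervised segmentation (illustrative
# only; network, threshold, and loss weight are assumptions, not the paper's setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Stand-in segmentation network: three 3D conv layers, per-voxel class logits."""
    def __init__(self, in_ch=1, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, n_classes, 1),
        )

    def forward(self, x):
        return self.net(x)

def semi_supervised_step(model, opt, x_lab, y_lab, x_unlab,
                         conf_thresh=0.9, unlab_weight=0.5):
    """Supervised cross-entropy on labeled scans plus cross-entropy against
    confident pseudo-labels on unlabeled scans."""
    model.train()
    opt.zero_grad()
    sup_loss = F.cross_entropy(model(x_lab), y_lab)
    with torch.no_grad():
        probs = F.softmax(model(x_unlab), dim=1)
        conf, pseudo = probs.max(dim=1)
    mask = conf > conf_thresh                       # keep only confident voxels
    unsup = F.cross_entropy(model(x_unlab), pseudo, reduction="none")
    unsup_loss = (unsup * mask).sum() / mask.sum().clamp(min=1)
    loss = sup_loss + unlab_weight * unsup_loss
    loss.backward()
    opt.step()
    return loss.item()

model = TinySegNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x_lab = torch.randn(1, 1, 32, 32, 32)
y_lab = torch.randint(0, 4, (1, 32, 32, 32))
x_unlab = torch.randn(1, 1, 32, 32, 32)
semi_supervised_step(model, opt, x_lab, y_lab, x_unlab)
```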
1 code implementation • 3 Mar 2022 • Wenhui Cui, Haleh Akrami, Anand A. Joshi, Richard M. Leahy
The amount of manually labeled data is limited in medical applications, so semi-supervised learning and automatic labeling strategies can be an asset for training deep neural networks.
no code implementations • 13 Dec 2020 • Anand A. Joshi, Soyoung Choi, Haleh Akrami, Richard M. Leahy
While pointwise analysis methods are common in anatomical studies such as cortical thickness analysis and voxel- and tensor-based morphometry and their variants, such a method is lacking for rs-fMRI; developing one could improve the utility of rs-fMRI for group studies.
no code implementations • 18 Oct 2020 • Haleh Akrami, Anand A. Joshi, Sergul Aydore, Richard M. Leahy
We use the estimated quantiles to recover the mean and variance under a Gaussian assumption, and then compute the reconstruction probability as a principled score for outlier or anomaly detection.
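To make the quantile-to-Gaussian step concrete, here is a worked sketch under the assumption (for illustration only) that the network outputs the 0.15, 0.5, and 0.85 quantiles of each reconstructed feature: two quantiles fix the standard deviation via the standard-normal inverse CDF, the median gives the mean, and the resulting Gaussian yields a reconstruction log-probability used as the anomaly score. The chosen quantile levels and toy inputs are assumptions, not the paper's settings.

```python
# Illustrative sketch of the quantile-to-Gaussian idea (not the authors' code):
# from two estimated quantiles of each reconstructed feature, recover a Gaussian
# mean and standard deviation, then score inputs by reconstruction log-probability.
import torch
from torch.distributions import Normal

def gaussian_from_quantiles(q_med, q_lo, q_hi, alpha_lo=0.15, alpha_hi=0.85):
    """Under a Gaussian assumption, q_alpha = mu + sigma * Phi^{-1}(alpha),
    so two quantiles determine sigma and the median gives mu."""
    std_normal = Normal(0.0, 1.0)
    z_lo = std_normal.icdf(torch.tensor(alpha_lo))
    z_hi = std_normal.icdf(torch.tensor(alpha_hi))
    sigma = (q_hi - q_lo) / (z_hi - z_lo)
    mu = q_med                                    # the 0.5 quantile equals the mean
    return mu, sigma.clamp(min=1e-6)

def reconstruction_log_prob(x, q_med, q_lo, q_hi):
    """Per-sample reconstruction log-probability; low values flag anomalies."""
    mu, sigma = gaussian_from_quantiles(q_med, q_lo, q_hi)
    return Normal(mu, sigma).log_prob(x).sum(dim=-1)

# Toy usage with made-up quantile estimates for 4 samples with 10 features each.
x = torch.randn(4, 10)
q_med = torch.randn(4, 10)
spread = torch.rand(4, 10) + 0.1
score = reconstruction_log_prob(x, q_med, q_med - spread, q_med + spread)
print(score)          # lower log-probability => more anomalous
```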
no code implementations • 15 Jun 2020 • Haleh Akrami, Sergul Aydore, Richard M. Leahy, Anand A. Joshi
Sources of outliers in training data include the data collection process itself (random noise) and malicious attackers (data poisoning) who aim to degrade the performance of the machine learning model.
no code implementations • 23 May 2019 • Haleh Akrami, Anand A. Joshi, Jian Li, Sergul Aydore, Richard M. Leahy
Machine learning methods often need a large amount of labeled training data.