Foundation Models Meet Imbalanced Single-Cell Data When Learning Cell Type Annotations

With the emergence of single-cell foundation models, an important question arises: how do these models perform when trained on datasets with imbalanced cell-type distributions caused by rare cell types or biased sampling? We benchmark three foundation models, scGPT, scBERT, and Geneformer, on cell-type annotation under skewed cell-type distributions. While all models showed reduced performance when challenged with rare cell types, scGPT and scBERT performed better than Geneformer. Notably, in contrast to scGPT and scBERT, Geneformer encodes each gene by its ordinal rank in the tokenized input rather than by its raw expression value. To mitigate the effect of a skewed distribution, we find that random oversampling, but not random undersampling, improved performance for all three foundation models. Finally, scGPT, which uses FlashAttention, has the fastest computational speed, whereas scBERT is the most memory-efficient. We conclude that tokenization and data representation are essential areas of research, and new strategies are needed to mitigate the effects of imbalanced learning in single-cell foundation models. Code and data for reproducibility are available at https://github.com/SabbaghCodes/ImbalancedLearningForSingleCellFoundationModels.
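The rank-based encoding attributed to Geneformer above can be illustrated in a few lines: genes are ordered by expression within each cell and only their ordinal positions are kept as the token sequence. The sketch below is a schematic of that idea, not Geneformer's actual tokenizer (which, among other details, also normalizes each gene by its median expression across a large reference corpus); the function name and vocabulary size are illustrative.

```python
import numpy as np

def rank_value_encode(expression, gene_ids, max_len=2048):
    """Order genes by expression in one cell and return their IDs as a
    rank-ordered token sequence (highest-expressed gene first), discarding
    the raw expression values themselves.

    Schematic only: real tokenizers add corpus-level normalization,
    special tokens, and padding.
    """
    expressed = expression > 0                      # keep only detected genes
    order = np.argsort(-expression[expressed])      # descending expression
    tokens = gene_ids[expressed][order]
    return tokens[:max_len]                         # truncate to context length

# Toy example: five genes in a single cell.
expression = np.array([0.0, 3.2, 1.1, 7.5, 0.4])
gene_ids = np.array([101, 102, 103, 104, 105])
print(rank_value_encode(expression, gene_ids))      # -> [104 102 103 105]
```

Because only the ordering survives tokenization, two cells with very different absolute expression levels but the same gene ranking map to the same sequence, which may matter when rare cell types are distinguished by expression magnitude.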
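The random oversampling finding can likewise be made concrete. The exact pipeline is in the linked repository; the sketch below only illustrates the resampling step, assuming scikit-learn-style arrays and the imbalanced-learn library (an assumption about tooling, not a claim about the authors' implementation). Resampling would be applied to the fine-tuning split only, never to the held-out test set.

```python
import numpy as np
from imblearn.over_sampling import RandomOverSampler

rng = np.random.default_rng(0)
X = rng.random((100, 50))                # 100 cells x 50 genes (toy data)
y = np.array([0] * 90 + [1] * 10)        # class 1 plays the rare cell type

# Duplicate minority-class cells at random until classes are balanced.
ros = RandomOverSampler(random_state=0)
X_res, y_res = ros.fit_resample(X, y)
print(np.bincount(y_res))                # -> [90 90]
```

Random undersampling would instead discard majority-class cells, shrinking the training set; the abstract reports that this hurt rather than helped all three models.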
