no code implementations • 17 Feb 2023 • Julian Rodemann, Jann Goschenhofer, Emilio Dorigatti, Thomas Nagler, Thomas Augustin
We derive this selection criterion by proving Bayes optimality of the posterior predictive of pseudo-samples.
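The excerpt above only names the criterion; as a rough, non-authoritative sketch, one way such posterior-predictive-based pseudo-label selection might look in practice is to rank candidate pseudo-samples by an approximation of their posterior predictive probability. The ensemble-style approximation and all names below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def select_pseudo_sample(ensemble_probs):
    """Rank unlabeled candidates by an ensemble approximation of the
    posterior predictive and return the best candidate's index and label.

    ensemble_probs: (n_models, n_unlabeled, n_classes) class probabilities,
    a crude stand-in for draws from the parameter posterior.
    """
    predictive = ensemble_probs.mean(axis=0)       # approximate posterior predictive
    pseudo_labels = predictive.argmax(axis=1)      # pseudo-label = predictive mode
    scores = predictive[np.arange(len(pseudo_labels)), pseudo_labels]
    best = int(scores.argmax())                    # candidate with the highest score
    return best, int(pseudo_labels[best])

# Example: 5 ensemble members, 100 unlabeled points, 3 classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=(5, 100))
print(select_pseudo_sample(probs))
```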
1 code implementation • 24 Oct 2022 • Ingo Ziegler, Bolei Ma, Ercong Nie, Bernd Bischl, David Rügamer, Benjamin Schubert, Emilio Dorigatti
While direct identification of proteasomal cleavage in vitro is cumbersome and low throughput, it is possible to implicitly infer cleavage events from the termini of MHC-presented epitopes, which can be detected in large amounts thanks to recent advances in high-throughput MHC ligandomics.
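To make the idea of implicitly inferred cleavage labels concrete, here is a hypothetical sketch: the position right after the C-terminus of an MHC-presented epitope is taken as an observed cleavage site, with flanking residues from the source protein as context. The sequences, function name, and window size are invented for illustration.

```python
def cleavage_example(protein: str, epitope: str, context: int = 3):
    """Return the residue window around the C-terminus of an MHC-presented
    epitope, treated as an implicitly observed proteasomal cleavage site.
    Returns None if the epitope is not found in the source protein.
    """
    start = protein.find(epitope)
    if start == -1:
        return None
    end = start + len(epitope)                    # position right after the C-terminus
    left = protein[max(0, end - context):end]     # residues before the cut
    right = protein[end:end + context]            # residues after the cut
    return left, right

# Toy example with made-up sequences.
print(cleavage_example("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "QRQISFVKS"))
```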
1 code implementation • 14 Sep 2022 • Emilio Dorigatti, Bernd Bischl, Benjamin Schubert
Accurate in silico modeling of the antigen processing pathway is crucial to enable personalized epitope vaccine design for cancer.
no code implementations • 14 Sep 2022 • Shunjie-Fabian Zheng, JaeEun Nam, Emilio Dorigatti, Bernd Bischl, Shekoofeh Azizi, Mina Rezaei
However, existing methods for joint clustering and contrastive learning do not perform well on long-tailed data distributions, as majority classes overwhelm and distort the loss of minority classes, thus preventing meaningful representations from being learned.
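As a back-of-the-envelope illustration of that imbalance (all numbers are invented, not from the paper), an unweighted average loss on a long-tailed dataset is dominated by the head class even when that class is easy, which is what drowns out the minority-class signal:

```python
import numpy as np

counts = np.array([900, 90, 10])              # long-tailed class sizes
per_sample_loss = np.array([0.2, 1.0, 2.5])   # typical per-sample loss in each class

total = counts * per_sample_loss
unweighted_mean = total.sum() / counts.sum()  # ~0.30, dominated by the head class
head_share = total[0] / total.sum()           # head class contributes ~61% of the loss
class_balanced_mean = per_sample_loss.mean()  # ~1.23 once each class counts equally
print(unweighted_mean, head_share, class_balanced_mean)
```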
1 code implementation • 6 Sep 2022 • Emilio Dorigatti, Jonas Schweisthal, Bernd Bischl, Mina Rezaei
Learning from positive and unlabeled (PU) data is a setting in which the learner has access only to positive and unlabeled samples, with no information about negative examples.
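For readers unfamiliar with the setting, a minimal sketch of how a PU dataset can be simulated from fully labeled binary data (the function name, labeled fraction, and data below are illustrative only):

```python
import numpy as np

def make_pu_dataset(X, y, labeled_frac=0.3, seed=0):
    """Turn a fully labeled binary dataset into a PU dataset:
    a fraction of the positives keeps its label (s = 1); everything
    else, positives and negatives alike, becomes unlabeled (s = 0).
    """
    rng = np.random.default_rng(seed)
    s = np.zeros(len(y), dtype=int)
    pos = np.flatnonzero(y == 1)
    labeled = rng.choice(pos, size=int(labeled_frac * len(pos)), replace=False)
    s[labeled] = 1
    return X, s  # the true labels y stay hidden from the learner

# Example with random features and labels.
X = np.random.randn(1000, 5)
y = (np.random.rand(1000) > 0.7).astype(int)
X_pu, s = make_pu_dataset(X, y)
```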
no code implementations • 31 Jan 2022 • Emilio Dorigatti, Jann Goschenhofer, Benjamin Schubert, Mina Rezaei, Bernd Bischl
In this work, we thus propose to tackle the issues of imbalanced datasets and model calibration in a PUL setting through an uncertainty-aware pseudo-labeling procedure (PUUPL): pseudo-labeling expands the labeled dataset with new samples from the unlabeled set, boosting the signal from the minority class, while explicit uncertainty quantification prevents the emergence of harmful confirmation bias, leading to increased predictive performance.
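The excerpt gives only the high-level idea; the sketch below illustrates one pseudo-labeling round in that spirit, where ensemble disagreement serves as the uncertainty proxy and only low-uncertainty predictions are promoted to pseudo-labels. The threshold, uncertainty measure, and all names are my assumptions, not the PUUPL specification.

```python
import numpy as np

def pseudo_label_round(ensemble_probs, uncertainty_threshold=0.01):
    """One illustrative round of uncertainty-aware pseudo-labeling.

    ensemble_probs: (n_models, n_unlabeled) predicted probabilities of the
    positive class. Only samples whose ensemble members agree closely
    (low variance) are promoted to the pseudo-labeled set.
    """
    mean = ensemble_probs.mean(axis=0)
    var = ensemble_probs.var(axis=0)        # disagreement as uncertainty proxy
    idx = np.flatnonzero(var < uncertainty_threshold)
    labels = (mean[idx] >= 0.5).astype(int)
    return idx, labels

# Example: 4 ensemble members, 200 unlabeled samples with correlated predictions.
base = np.random.rand(200)
probs = np.clip(base + 0.1 * np.random.randn(4, 200), 0, 1)
idx, labels = pseudo_label_round(probs)
```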
no code implementations • 11 Sep 2021 • Mina Rezaei, Emilio Dorigatti, David Rügamer, Bernd Bischl
We simultaneously train two deep learning models: a deep representation network that captures the data distribution, and a deep clustering network that learns embedded features and performs clustering.
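The excerpt describes only the two-network design; the PyTorch sketch below shows one way such joint training could be wired up. The layer sizes, the reconstruction loss, and the entropy penalty are my assumptions, not the authors' objective.

```python
import torch
import torch.nn as nn

class RepresentationNet(nn.Module):
    """Encoder-decoder that learns an embedding of the data distribution."""
    def __init__(self, in_dim=784, embed_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, embed_dim))
        self.decoder = nn.Sequential(nn.Linear(embed_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

class ClusteringNet(nn.Module):
    """Clustering head that maps embeddings to soft cluster assignments."""
    def __init__(self, embed_dim=32, n_clusters=10):
        super().__init__()
        self.head = nn.Linear(embed_dim, n_clusters)

    def forward(self, z):
        return torch.softmax(self.head(z), dim=-1)

rep, clu = RepresentationNet(), ClusteringNet()
opt = torch.optim.Adam(list(rep.parameters()) + list(clu.parameters()), lr=1e-3)

x = torch.randn(64, 784)                     # dummy batch
z, recon = rep(x)
q = clu(z)
# Reconstruction keeps the embedding faithful to the data distribution;
# an entropy penalty pushes the clustering head toward confident assignments.
loss = nn.functional.mse_loss(recon, x) - 0.1 * (q * torch.log(q + 1e-8)).sum(1).mean()
opt.zero_grad()
loss.backward()
opt.step()
```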
no code implementations • 3 Jan 2021 • Cornelius Fritz, Emilio Dorigatti, David Rügamer
The results corroborate the necessity of including mobility data and showcase the flexibility and interpretability of our approach.