no code implementations • 19 Jan 2024 • Fernando Pérez-García, Harshita Sharma, Sam Bond-Taylor, Kenza Bouzid, Valentina Salvatelli, Maximilian Ilse, Shruthi Bannur, Daniel C. Castro, Anton Schwaighofer, Matthew P. Lungren, Maria Wetscherek, Noel Codella, Stephanie L. Hyland, Javier Alvarez-Valle, Ozan Oktay
We introduce RAD-DINO, a biomedical image encoder pre-trained solely on unimodal biomedical imaging data that obtains similar or greater performance than state-of-the-art biomedical language supervised models on a diverse range of benchmarks.
no code implementations • 20 Dec 2023 • Fernando Pérez-García, Sam Bond-Taylor, Pedro P. Sanchez, Boris van Breugel, Daniel C. Castro, Harshita Sharma, Valentina Salvatelli, Maria T. A. Wetscherek, Hannah Richardson, Matthew P. Lungren, Aditya Nori, Javier Alvarez-Valle, Ozan Oktay, Maximilian Ilse
Biomedical imaging datasets are often small and biased, meaning that real-world performance of predictive models can be substantially lower than expected from internal testing.
no code implementations • CVPR 2023 • Shruthi Bannur, Stephanie Hyland, Qianchu Liu, Fernando Pérez-García, Maximilian Ilse, Daniel C. Castro, Benedikt Boecking, Harshita Sharma, Kenza Bouzid, Anja Thieme, Anton Schwaighofer, Maria Wetscherek, Matthew P. Lungren, Aditya Nori, Javier Alvarez-Valle, Ozan Oktay
Prior work in biomedical VLP has mostly relied on the alignment of single image and report pairs even though clinical notes commonly refer to prior images.
1 code implementation • 8 Mar 2021 • Maximilian Ilse, Patrick Forré, Max Welling, Joris M. Mooij
Second, for continuous variables and assuming a linear-Gaussian model, we derive equality constraints for the parameters of the observational and interventional distributions.
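As an illustrative sketch of what an equality constraint between observational and interventional parameters can look like (this is a textbook two-variable example, not the paper's derivation), consider the linear-Gaussian model $X \to Y$ with $Y = \alpha X + \varepsilon_Y$:

```latex
% Linear-Gaussian SCM: X -> Y, with Y = \alpha X + \varepsilon_Y,
% \varepsilon_Y \sim \mathcal{N}(0, \sigma^2) independent of X.
% The observational distribution identifies the edge coefficient:
\operatorname{Cov}(X, Y) = \alpha \operatorname{Var}(X)
  \quad\Longrightarrow\quad
  \alpha = \frac{\operatorname{Cov}(X, Y)}{\operatorname{Var}(X)} .
% The interventional distribution after do(X = x) has mean:
\mathbb{E}\bigl[Y \mid \mathrm{do}(X = x)\bigr] = \alpha x .
% Equality constraint linking the two parameterizations:
\mathbb{E}\bigl[Y \mid \mathrm{do}(X = x)\bigr]
  = \frac{\operatorname{Cov}(X, Y)}{\operatorname{Var}(X)} \, x .
```

In this unconfounded case the constraint says the interventional mean must coincide with the observational regression line; with latent confounding the two parameterizations decouple, which is what makes such constraints informative.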
no code implementations • NeurIPS Workshop ICBINB 2020 • Maurice Frank, Maximilian Ilse
Recent advancements in deep generative modeling make it possible to learn prior distributions from complex data that subsequently can be used for Bayesian inference.
1 code implementation • 4 May 2020 • Maximilian Ilse, Jakub M. Tomczak, Patrick Forré
We argue that causal concepts can be used to explain the success of data augmentation: augmentations can weaken the spurious correlation between the observed domains and the task labels.
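The claim that augmentation weakens a spurious domain–label correlation can be illustrated with a toy numpy sketch (this is a hypothetical example, not code from the paper): the domain attribute starts out perfectly predictive of the label, and randomizing it during augmentation drives the correlation toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)        # task labels
domain = y.copy()                # spurious: domain perfectly predicts the label

def augment(domain, rng):
    # Randomize the domain attribute (think: random rotation or color jitter),
    # leaving the label-relevant content untouched.
    return rng.integers(0, 2, len(domain))

corr_before = abs(np.corrcoef(domain, y)[0, 1])           # 1.0 by construction
corr_after = abs(np.corrcoef(augment(domain, rng), y)[0, 1])  # near 0
```

After augmentation a classifier can no longer exploit the domain attribute as a shortcut, which is the intuition the paper formalizes with causal language.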
3 code implementations • 24 May 2019 • Maximilian Ilse, Jakub M. Tomczak, Christos Louizos, Max Welling
We consider the problem of domain generalization: how to learn, from data drawn from a set of source domains, representations that generalize to a previously unseen domain.
no code implementations • ICLR Workshop DeepGenStruct 2019 • Maximilian Ilse, Jakub M. Tomczak, Christos Louizos, Max Welling
We consider the problem of domain generalization: how to learn, from data drawn from a set of source domains, representations that generalize to a previously unseen domain.
17 code implementations • ICML 2018 • Maximilian Ilse, Jakub M. Tomczak, Max Welling
Multiple instance learning (MIL) is a variation of supervised learning where a single class label is assigned to a bag of instances.
Ranked #7 on Aerial Scene Classification on UCM (50% as training set)
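The attention-based MIL pooling proposed in that paper — a bag embedding computed as a weighted average of instance embeddings, with weights produced by a small learned network — can be sketched in numpy as follows (the gateless variant; parameter names and dimensions here are illustrative):

```python
import numpy as np

def attention_mil_pool(H, V, w):
    """Attention-based MIL pooling: a_k ∝ exp(w^T tanh(V h_k)), z = Σ_k a_k h_k.

    H: (K, D) bag of K instance embeddings; V: (A, D); w: (A,).
    Returns the bag embedding z (D,) and the attention weights a (K,).
    """
    scores = w @ np.tanh(V @ H.T)      # one scalar score per instance, shape (K,)
    a = np.exp(scores - scores.max())  # numerically stable softmax
    a = a / a.sum()                    # attention weights sum to 1
    return a @ H, a                    # convex combination of instances

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))   # bag of K=5 instances, embedding dim 8
V = rng.normal(size=(4, 8))   # projection into a 4-dim attention space
w = rng.normal(size=4)
z, a = attention_mil_pool(H, V, w)
```

Because the weights form a convex combination, the pooled embedding is permutation-invariant over the bag, and the weights themselves indicate which instances drive the bag-level prediction.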
no code implementations • 1 Dec 2017 • Jakub M. Tomczak, Maximilian Ilse, Max Welling
The computer-aided analysis of medical scans is a longstanding goal in the medical imaging field.