no code implementations • 1 Mar 2024 • Nikolas Adaloglou, Tim Kaiser, Felix Michels, Markus Kollmann
We present a comprehensive experimental study on image-level conditioning for diffusion models using cluster assignments.
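A minimal sketch of the general idea behind cluster-based image-level conditioning, assuming cluster assignments computed on frozen pretrained features stand in for class labels as the conditioning signal. The encoder choice and the conditional trainer call are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): derive image-level conditioning
# signals from k-means cluster assignments instead of ground-truth labels.
import numpy as np
from sklearn.cluster import KMeans

def cluster_conditions(features: np.ndarray, n_clusters: int, seed: int = 0) -> np.ndarray:
    """Cluster frozen pretrained features; return one cluster id per image."""
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return kmeans.fit_predict(features)  # shape: (n_images,)

# `features` would come from a frozen pretrained encoder (e.g. DINO or CLIP);
# random data is used here only to keep the sketch self-contained.
features = np.random.randn(1000, 768).astype(np.float32)
cond = cluster_conditions(features, n_clusters=10)
# `cond` can then be fed wherever a class-conditional diffusion model expects a
# label, e.g. train_conditional_diffusion(images, labels=cond)  # hypothetical
```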
1 code implementation • 31 Mar 2023 • Nikolas Adaloglou, Felix Michels, Hamza Kalisch, Markus Kollmann
We present a general methodology that learns to classify images without labels by leveraging pretrained feature extractors.
Ranked #1 on Image Clustering on CIFAR-10 (using extra training data)
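For context, the clustering accuracy reported on such image-clustering benchmarks is typically computed by optimally matching predicted cluster ids to ground-truth labels with the Hungarian algorithm. A self-contained sketch of that metric (not the paper's code):

```python
# Sketch of the standard clustering-accuracy metric: optimally match
# predicted cluster ids to ground-truth labels (Hungarian algorithm).
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    n = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[t, p] += 1
    rows, cols = linear_sum_assignment(cost, maximize=True)  # best label<->cluster match
    return cost[rows, cols].sum() / len(y_true)

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])  # clusters permuted but internally consistent
print(clustering_accuracy(y_true, y_pred))  # 1.0
```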
1 code implementation • 10 Mar 2023 • Nikolas Adaloglou, Felix Michels, Tim Kaiser, Markus Kollmann
Intriguingly, we show that (i) PLP outperforms the previous state-of-the-art (Ming et al., 2022) on all 5 large-scale ImageNet-based benchmarks, with an average AUROC gain of 3.4% using the largest CLIP model (ViT-G), and (ii) linear probing outperforms fine-tuning by large margins for CLIP architectures (i.e. …
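A hedged sketch of the general linear-probing recipe on frozen CLIP embeddings, using the maximum softmax probability as the OOD score and AUROC as the metric; the feature arrays below are synthetic placeholders, and this is not the paper's PLP implementation.

```python
# Sketch (assumed recipe, not the paper's code): linear probe on frozen
# CLIP embeddings, max-softmax-probability as OOD score, AUROC as metric.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_classes, dim = 10, 512
class_means = rng.normal(scale=3.0, size=(n_classes, dim))  # stand-in for real structure

train_labels = rng.integers(0, n_classes, 500)
train_feats = class_means[train_labels] + rng.normal(size=(500, dim))
id_feats = class_means[rng.integers(0, n_classes, 200)] + rng.normal(size=(200, dim))
ood_feats = rng.normal(size=(200, dim))  # placeholder out-of-distribution embeddings

probe = LogisticRegression(max_iter=1000).fit(train_feats, train_labels)  # linear probe

def msp_score(feats):
    return probe.predict_proba(feats).max(axis=1)  # high = looks in-distribution

scores = np.concatenate([msp_score(id_feats), msp_score(ood_feats)])
is_id = np.concatenate([np.ones(len(id_feats)), np.zeros(len(ood_feats))])
print("AUROC:", roc_auc_score(is_id, scores))
```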
1 code implementation • 17 Jan 2022 • Nima Rafiee, Rahil Gholamipoorfard, Nikolas Adaloglou, Simon Jaxy, Julius Ramakers, Markus Kollmann
Detecting whether examples belong to a given in-distribution or are Out-Of-Distribution (OOD) requires identifying features specific to the in-distribution.
Out of Distribution (OOD) Detection • Self-Supervised Anomaly Detection • +1
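One common way to turn features specific to the in-distribution into a detector is to score test points by their distance to the nearest training features. The following is a generic k-NN baseline sketch, not necessarily the method proposed in the paper above.

```python
# Sketch of a distance-based OOD score in a learned feature space
# (a common baseline; not necessarily the method proposed in the paper).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_ood_scores(train_feats: np.ndarray, test_feats: np.ndarray, k: int = 5) -> np.ndarray:
    """Higher score = farther from the in-distribution training features."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_feats)
    dists, _ = nn.kneighbors(test_feats)
    return dists.mean(axis=1)

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(500, 128))       # in-distribution feature vectors
id_test = rng.normal(size=(100, 128))
ood_test = rng.normal(loc=2.0, size=(100, 128))
print("ID  mean score:", knn_ood_scores(train_feats, id_test).mean())
print("OOD mean score:", knn_ood_scores(train_feats, ood_test).mean())  # noticeably larger
```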
no code implementations • 1 Jan 2021 • Nima Rafiee, Rahil Gholamipoor, Markus Kollmann
In this paper we present GenAD, a simple and generic framework for detecting examples that lie out-of-distribution for a given training set.
Out-of-Distribution Detection • Unsupervised Anomaly Detection
no code implementations • 1 Dec 2020 • Nima Rafiee, Rahil Gholamipoor, Markus Kollmann
Classifying samples as in-distribution or out-of-distribution (OOD) is a challenging problem of anomaly detection and a strong test of the generalisation power for models of the in-distribution.
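As one concrete instance of such an in-distribution/OOD classifier, a standard baseline (not this paper's method) fits a Gaussian to training features and uses the Mahalanobis distance as the score:

```python
# Sketch of a Mahalanobis-distance OOD score (a standard baseline,
# not necessarily the method of the paper above).
import numpy as np

def fit_gaussian(train_feats: np.ndarray):
    mean = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(train_feats.shape[1])
    return mean, np.linalg.inv(cov)

def mahalanobis_scores(test_feats: np.ndarray, mean: np.ndarray, prec: np.ndarray) -> np.ndarray:
    diff = test_feats - mean
    return np.einsum("ij,jk,ik->i", diff, prec, diff)  # larger = more likely OOD

rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 32))          # in-distribution feature vectors
ood = rng.normal(loc=1.5, size=(100, 32))    # shifted, out-of-distribution
mean, prec = fit_gaussian(train)
print(mahalanobis_scores(ood, mean, prec).mean() > mahalanobis_scores(train, mean, prec).mean())  # True
```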