Search Results for author: Markus Kollmann

Found 6 papers, 3 papers with code

Rethinking cluster-conditioned diffusion models

no code implementations • 1 Mar 2024 • Nikolas Adaloglou, Tim Kaiser, Felix Michels, Markus Kollmann

We present a comprehensive experimental study on image-level conditioning for diffusion models using cluster assignments; a minimal cluster-conditioning sketch follows this entry.

Clustering · Conditional Image Generation · +1
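The entry above only states that generation is conditioned on cluster assignments rather than human labels, so the sketch below is a generic illustration, not the paper's architecture: hypothetical k-means cluster IDs are embedded and added to the timestep embedding of a toy denoiser, the same way class labels are injected in class-conditional diffusion models. All module and tensor names are made up for illustration.

```python
# Hypothetical sketch: condition a denoising network on cluster IDs instead of
# human labels.  The cluster assignments could come from k-means over features
# of any frozen extractor; the denoiser treats them exactly like class labels.
import torch
import torch.nn as nn


class ClusterConditionedDenoiser(nn.Module):
    def __init__(self, num_clusters: int, dim: int = 256):
        super().__init__()
        self.cluster_emb = nn.Embedding(num_clusters, dim)   # one vector per cluster
        self.time_emb = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.backbone = nn.Sequential(
            nn.Linear(3 * 32 * 32 + dim, 1024),
            nn.SiLU(),
            nn.Linear(1024, 3 * 32 * 32),
        )

    def forward(self, x_noisy, t, cluster_id):
        # Sum the timestep and cluster embeddings, then predict the noise.
        cond = self.time_emb(t.float().unsqueeze(-1)) + self.cluster_emb(cluster_id)
        h = torch.cat([x_noisy.flatten(1), cond], dim=1)
        return self.backbone(h).view_as(x_noisy)


x = torch.randn(8, 3, 32, 32)                # noisy images
t = torch.randint(0, 1000, (8,))             # diffusion timesteps
c = torch.randint(0, 100, (8,))              # k-means cluster assignments
eps_pred = ClusterConditionedDenoiser(num_clusters=100)(x, t, c)
```

In this sketch, swapping annotated labels for cluster IDs leaves the rest of the denoising objective untouched, which is the appeal of image-level conditioning on unlabelled data.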

Exploring the Limits of Deep Image Clustering using Pretrained Models

1 code implementation • 31 Mar 2023 • Nikolas Adaloglou, Felix Michels, Hamza Kalisch, Markus Kollmann

We present a general methodology that learns to classify images without labels by leveraging pretrained feature extractors; a minimal feature-clustering sketch follows this entry.

 Ranked #1 on Image Clustering on CIFAR-10 (using extra training data)

Clustering · Image Clustering
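The abstract above describes classifying images without labels on top of pretrained feature extractors, but the listing does not reproduce the training procedure. Below is a hedged baseline sketch of that general recipe: a frozen torchvision ResNet-50 (a stand-in backbone, not necessarily the one used in the paper) embeds CIFAR-10 images, and k-means groups the embeddings into as many clusters as there are classes.

```python
# Hedged sketch of the general recipe: embed images with a frozen pretrained
# extractor, then cluster the embeddings.  ResNet-50 is only a stand-in backbone.
import torch
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import CIFAR10
from sklearn.cluster import KMeans

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
dataset = CIFAR10(root="data", train=False, download=True, transform=preprocess)
loader = DataLoader(dataset, batch_size=256, num_workers=4)

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()          # keep the 2048-d penultimate features
backbone.eval()

feats = []
with torch.no_grad():
    for images, _ in loader:
        feats.append(backbone(images))
features = torch.cat(feats).numpy()

# Unsupervised "classification": one k-means cluster per (unknown) class.
cluster_ids = KMeans(n_clusters=10, n_init=10).fit_predict(features)
```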

Adapting Contrastive Language-Image Pretrained (CLIP) Models for Out-of-Distribution Detection

1 code implementation • 10 Mar 2023 • Nikolas Adaloglou, Felix Michels, Tim Kaiser, Markus Kollmann

Intriguingly, we show that (i) PLP outperforms the previous state-of-the-art (Ming et al., 2022) on all 5 large-scale benchmarks based on ImageNet, specifically by an average AUROC gain of 3.4% using the largest CLIP model (ViT-G), and (ii) linear probing outperforms fine-tuning by large margins for CLIP architectures (i.e. …). A minimal linear-probing sketch follows this entry.

Anomaly Detection · Image Captioning · +3
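The snippet above refers to pseudo-label probing (PLP) and linear probing of CLIP models without spelling out the procedure, so the following is only a hedged sketch of the linear-probing idea: a single linear layer is trained on frozen image embeddings, and its maximum softmax probability is reused as an OOD score. The random tensors stand in for real CLIP features, and the labels stand in for whatever (pseudo-)labels the probe is trained on.

```python
# Hedged sketch: linear probing on frozen CLIP image embeddings, with the
# probe's maximum softmax probability (MSP) reused as an OOD score.
# Random tensors below are placeholders for real CLIP features and labels.
import torch
import torch.nn.functional as F

dim, num_classes = 512, 100
train_feats = torch.randn(2000, dim)              # frozen in-distribution embeddings
train_labels = torch.randint(0, num_classes, (2000,))
test_feats = torch.randn(500, dim)                # mixture of ID and OOD embeddings

probe = torch.nn.Linear(dim, num_classes)         # the only trainable parameters
opt = torch.optim.AdamW(probe.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = F.cross_entropy(probe(train_feats), train_labels)
    loss.backward()
    opt.step()

with torch.no_grad():
    # Higher max-softmax suggests in-distribution; 1 - MSP serves as an OOD score.
    msp = probe(test_feats).softmax(dim=-1).max(dim=-1).values
    ood_score = 1.0 - msp
```

Keeping the CLIP encoder frozen and training only the probe is what distinguishes linear probing from fine-tuning in this sketch.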

Self-Supervised Anomaly Detection by Self-Distillation and Negative Sampling

1 code implementation • 17 Jan 2022 • Nima Rafiee, Rahil Gholamipoorfard, Nikolas Adaloglou, Simon Jaxy, Julius Ramakers, Markus Kollmann

Detecting whether examples belong to a given in-distribution or are Out-Of-Distribution (OOD) requires identifying features specific to the in-distribution.

Out of Distribution (OOD) Detection · Self-Supervised Anomaly Detection · +1

A General Framework for Unsupervised Anomaly Detection

no code implementations • 1 Jan 2021 • Nima Rafiee, Rahil Gholamipoor, Markus Kollmann

In this paper we present GenAD, a simple and generic framework for detecting examples that lie out-of-distribution for a given training set.

Out-of-Distribution Detection · Unsupervised Anomaly Detection

Unsupervised Anomaly Detection From Semantic Similarity Scores

no code implementations • 1 Dec 2020 • Nima Rafiee, Rahil Gholamipoor, Markus Kollmann

Classifying samples as in-distribution or out-of-distribution (OOD) is a challenging problem of anomaly detection and a strong test of the generalisation power of models of the in-distribution; a similarity-score sketch follows this entry.

Out-of-Distribution Detection · Semantic Similarity · +2
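The abstract above scores samples by semantic similarity, but the listing does not give the exact score, so the code below is a generic stand-in: cosine similarity between a test embedding and its k nearest in-distribution embeddings, averaged and inverted into an anomaly score. The embeddings are random placeholders for features from any semantic encoder, and this is not claimed to be the paper's exact formulation.

```python
# Hedged stand-in: turn semantic similarity into an anomaly score by averaging
# the cosine similarity between a test embedding and its k nearest
# in-distribution training embeddings.  Random tensors replace real features.
import torch
import torch.nn.functional as F

train_emb = F.normalize(torch.randn(5000, 256), dim=-1)   # in-distribution bank
test_emb = F.normalize(torch.randn(100, 256), dim=-1)     # samples to be scored

k = 10
sims = test_emb @ train_emb.T                              # cosine similarities
topk = sims.topk(k, dim=-1).values                         # k most similar ID samples
ood_score = 1.0 - topk.mean(dim=-1)                        # low similarity => likely OOD
```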
