no code implementations • 1 Mar 2024 • Nikolas Adaloglou, Tim Kaiser, Felix Michels, Markus Kollmann
We present a comprehensive experimental study on image-level conditioning for diffusion models using cluster assignments.
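A minimal sketch of the idea, assuming cluster assignments obtained by plain k-means over frozen image features serve as the conditioning signal in place of ground-truth class labels; the function names and the numpy k-means below are illustrative, not the paper's implementation:

```python
import numpy as np

def kmeans_assignments(features, k, iters=20):
    """Cluster feature vectors with plain k-means and return cluster ids."""
    # Simple deterministic init for the demo: one seed every n/k samples.
    centroids = features[:: len(features) // k][:k].astype(float).copy()
    for _ in range(iters):
        # Assign each sample to its nearest centroid.
        d = np.linalg.norm(features[:, None] - centroids[None], axis=-1)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster empties.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = features[labels == j].mean(axis=0)
    return labels

# Toy example: two well-separated blobs of "image features".
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.1, (50, 8)), rng.normal(5, 0.1, (50, 8))])
labels = kmeans_assignments(feats, k=2)
# The cluster ids would then replace class labels when training a
# class-conditional diffusion model.
```

The cluster id plays exactly the role of a class label, so any class-conditional diffusion training loop can consume it unchanged.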
1 code implementation • 31 Mar 2023 • Nikolas Adaloglou, Felix Michels, Hamza Kalisch, Markus Kollmann
We present a general methodology that learns to classify images without labels by leveraging pretrained feature extractors.
Ranked #1 on Image Clustering on CIFAR-10 (using extra training data)
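A hedged sketch of how frozen pretrained features can yield a label-free classifier: cluster centroids (obtained by any clustering of the training features) act as class prototypes, and new images are assigned to the nearest prototype by cosine similarity. Names are illustrative, and this is a simplification, not the paper's exact training procedure:

```python
import numpy as np

def nearest_prototype_predict(feats, prototypes):
    """Assign each feature vector to its most similar prototype (cosine)."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return (f @ p.T).argmax(axis=1)

# Toy example: prototypes along two axes; queries near each axis should
# map to the matching prototype.
protos = np.array([[1.0, 0.0], [0.0, 1.0]])
queries = np.array([[0.9, 0.1], [0.2, 0.8], [5.0, 0.3]])
preds = nearest_prototype_predict(queries, protos)
# → [0, 1, 0]
```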
1 code implementation • 10 Mar 2023 • Nikolas Adaloglou, Felix Michels, Tim Kaiser, Markus Kollmann
Intriguingly, we show that (i) PLP outperforms the previous state-of-the-art (Ming et al., 2022) on all 5 large-scale benchmarks based on ImageNet, specifically by an average AUROC gain of 3.4% using the largest CLIP model (ViT-G), and (ii) linear probing outperforms fine-tuning by large margins for CLIP architectures (i.e.
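A sketch of the linear-probing recipe under common assumptions: a softmax classifier is trained on frozen in-distribution features (here with plain numpy gradient descent), and the maximum softmax probability then serves as an in-distribution score. This is a generic baseline for illustration, not the paper's PLP pipeline, and all names are made up:

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

def train_linear_probe(X, y, n_classes, lr=0.5, steps=300):
    """Softmax regression on frozen features via full-batch gradient descent."""
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]
    for _ in range(steps):
        # Gradient of the cross-entropy loss w.r.t. the weights.
        W -= lr * X.T @ (softmax(X @ W) - Y) / len(X)
    return W

def msp_score(X, W):
    """Maximum softmax probability: high for ID-like, low for ambiguous inputs."""
    return softmax(X @ W).max(axis=1)

# Toy ID data: two blobs separated along the first feature axis.
rng = np.random.default_rng(0)
X_id = np.vstack([rng.normal([+2, 0], 0.3, (40, 2)),
                  rng.normal([-2, 0], 0.3, (40, 2))])
y_id = np.array([0] * 40 + [1] * 40)
W = train_linear_probe(X_id, y_id, n_classes=2)

# OOD points lie off the discriminative axis, so the probe stays uncertain
# and their maximum softmax probability is lower on average.
X_ood = rng.normal([0, 3], 0.3, (40, 2))
```

The backbone never updates; only the linear head `W` is learned, which is what makes probing much cheaper than fine-tuning.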
1 code implementation • 17 Jan 2022 • Nima Rafiee, Rahil Gholamipoorfard, Nikolas Adaloglou, Simon Jaxy, Julius Ramakers, Markus Kollmann
Detecting whether examples belong to a given in-distribution or are Out-Of-Distribution (OOD) requires identifying features specific to the in-distribution.
Tasks: Out-of-Distribution (OOD) Detection, Self-Supervised Anomaly Detection, +1
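As a concrete illustration of the framing above, here is a generic kNN feature-distance OOD score (a common baseline, not the specific method of this paper): a query counts as out-of-distribution when even its nearest in-distribution features are far away.

```python
import numpy as np

def knn_ood_score(train_feats, query_feats, k=5):
    """OOD score = Euclidean distance to the k-th nearest in-distribution feature.
    Larger scores mean the query looks less like the training distribution."""
    d = np.linalg.norm(query_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k - 1]

# Toy check: queries drawn near the training blob score lower than far-away ones.
rng = np.random.default_rng(0)
train = rng.normal(0, 1, (200, 16))
near = rng.normal(0, 1, (20, 16))
far = rng.normal(8, 1, (20, 16))
scores_near = knn_ood_score(train, near)
scores_far = knn_ood_score(train, far)
```

Thresholding the score (tuned on held-out in-distribution data) turns it into a binary OOD detector.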
1 code implementation • 24 Jul 2020 • Nikolas Adaloglou, Theocharis Chatzis, Ilias Papastratis, Andreas Stergioulas, Georgios Th. Papadopoulos, Vassia Zacharopoulou, George J. Xydopoulos, Klimnis Atzakas, Dimitris Papazachariou, Petros Daras
In this paper, a comparative experimental assessment of computer vision-based methods for sign language recognition is conducted.
no code implementations • ECCV 2020 • Nikolas Adaloglou, Nicholas Vretos, Petros Daras
In this paper, a novel multi-view methodology for graph-based neural networks is proposed.
1 code implementation • ebook 2020 • Nikolas Adaloglou, Sergios Karagianakos
We hope this series gives you a broad overview of the field, regardless of your background in GANs, so that you do not need to read all the literature yourself.