Search Results for author: Jan Niklas Böhm

Found 4 papers, 4 papers with code

Self-supervised Visualisation of Medical Image Datasets

1 code implementation • 22 Feb 2024 • Ifeoma Veronica Nwabufo, Jan Niklas Böhm, Philipp Berens, Dmitry Kobak

Self-supervised learning methods based on data augmentations, such as SimCLR, BYOL, or DINO, allow obtaining semantically meaningful representations of image datasets and are widely used prior to supervised fine-tuning.

Contrastive Learning • Self-Supervised Learning
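
As a rough illustration of the augmentation-based setup that methods like SimCLR rely on, the sketch below builds a two-view augmentation pipeline with torchvision. The specific transforms and parameter values are illustrative assumptions, not taken from the paper.

```python
# Sketch of a SimCLR-style augmentation pipeline: two random "views" of the
# same image are produced and later pulled together in representation space.
# Transform choices and parameters here are illustrative only.
import torchvision.transforms as T

augment = T.Compose([
    T.RandomResizedCrop(224, scale=(0.2, 1.0)),
    T.RandomHorizontalFlip(),
    T.ColorJitter(0.4, 0.4, 0.4, 0.1),
    T.RandomGrayscale(p=0.2),
    T.ToTensor(),
])

def two_views(image):
    """Return two independently augmented views of one input image."""
    return augment(image), augment(image)
```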

Unsupervised visualization of image datasets using contrastive learning

1 code implementation • 18 Oct 2022 • Jan Niklas Böhm, Philipp Berens, Dmitry Kobak

This problem can be circumvented by self-supervised approaches based on contrastive learning, such as SimCLR, relying on data augmentation to generate implicit neighbors, but these methods do not produce two-dimensional embeddings suitable for visualization.

Contrastive Learning • Data Augmentation
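
To illustrate the general idea of making a contrastive loss act directly on a two-dimensional output, the sketch below uses a Cauchy similarity kernel in place of the usual cosine similarity. This is a simplified sketch under my own assumptions (function name, kernel, and normalization are mine), not the paper's exact objective.

```python
import torch

def cauchy_contrastive_loss(z_a, z_b):
    """Contrastive loss on 2D embeddings with a Cauchy similarity kernel.
    z_a, z_b: [n, 2] embeddings of two augmented views of the same n images.
    A simplified, assumption-laden sketch, not the paper's exact loss."""
    z = torch.cat([z_a, z_b], dim=0)           # [2n, 2]
    d2 = torch.cdist(z, z).pow(2)              # pairwise squared Euclidean distances
    sim = 1.0 / (1.0 + d2)                     # Cauchy similarity
    sim.fill_diagonal_(0.0)                    # exclude self-pairs from the normalizer
    n = z_a.shape[0]
    idx = torch.arange(2 * n)
    pos = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    positive = sim[idx, pos]                   # similarity to the other view of the same image
    return -(positive / sim.sum(dim=1)).log().mean()
```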

From $t$-SNE to UMAP with contrastive learning

2 code implementations • 3 Jun 2022 • Sebastian Damrich, Jan Niklas Böhm, Fred A. Hamprecht, Dmitry Kobak

We exploit this new conceptual connection to propose and implement a generalization of negative sampling, allowing us to interpolate between (and even extrapolate beyond) $t$-SNE and UMAP and their respective embeddings.

Contrastive Learning • Representation Learning
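
As a loose sketch of the negative-sampling structure being generalized here, the snippet below implements a UMAP-style negative-sampling loss over a kNN graph with a Cauchy kernel, where the number of negatives per edge trades attraction against repulsion. The function name, kernel choice, and default parameters are my assumptions, not the paper's exact formulation.

```python
import torch

def neg_sampling_loss(emb, edges, m=5, eps=1e-9):
    """Negative-sampling loss over a kNN graph (illustrative sketch only).
    emb: [n, 2] embedding; edges: [e, 2] long tensor of kNN pairs; m: negatives per edge."""
    i, j = edges[:, 0], edges[:, 1]
    neg = torch.randint(0, emb.shape[0], (edges.shape[0], m))       # uniform random negatives
    q_pos = 1.0 / (1.0 + (emb[i] - emb[j]).pow(2).sum(dim=1))       # Cauchy similarity of neighbors
    d_neg = (emb[i].unsqueeze(1) - emb[neg]).pow(2).sum(dim=2)
    q_neg = 1.0 / (1.0 + d_neg)
    attraction = -torch.log(q_pos + eps)                            # pull kNN pairs together
    repulsion = -torch.log(1.0 - q_neg + eps).sum(dim=1)            # push sampled negatives apart
    return (attraction + repulsion).mean()
```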

Attraction-Repulsion Spectrum in Neighbor Embeddings

1 code implementation • 17 Jul 2020 • Jan Niklas Böhm, Philipp Berens, Dmitry Kobak

Neighbor embeddings are a family of methods for visualizing complex high-dimensional datasets using $k$NN graphs.
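One practical knob for shifting the balance between attractive and repulsive forces in such embeddings is the exaggeration factor. The sketch below assumes the openTSNE library and its exaggeration parameter to trace out embeddings at a few attraction levels on placeholder data; the values used are illustrative, not the paper's experimental setup.

```python
# Sketch: varying the exaggeration factor in openTSNE to explore different
# attraction-repulsion balances (illustrative values, placeholder data).
import numpy as np
from openTSNE import TSNE

X = np.random.randn(1000, 50).astype(np.float32)   # placeholder high-dimensional data

for rho in (1, 4, 30):                              # larger rho = stronger attraction
    embedding = TSNE(exaggeration=rho, random_state=42).fit(X)
    print(rho, embedding.shape)
```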
