Search Results for author: Gregor Koehler

Found 10 papers, 4 papers with code

Constrained Parameter Regularization

1 code implementation · 15 Nov 2023 · Jörg K. H. Franke, Michael Hefenbrock, Gregor Koehler, Frank Hutter

Instead of applying a single constant penalty to all parameters, we enforce an upper bound on a statistical measure (e.g., the L$_2$-norm) of parameter groups.

Image Classification · Language Modelling
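The core idea lends itself to a short sketch. Below is a minimal PyTorch illustration that bounds the L$_2$-norm of each parameter tensor with a hard projection after every optimizer step; the paper's actual constrained-optimization scheme may differ, and the bound `kappa` and per-tensor grouping are illustrative assumptions, not the authors' implementation.

```python
import torch

def project_parameter_groups(param_groups, kappa=1.0):
    """After an optimizer step, rescale any parameter tensor whose
    L2-norm exceeds the upper bound kappa back onto the constraint."""
    with torch.no_grad():
        for group in param_groups:
            for param in group["params"]:
                norm = param.norm(p=2)
                if norm > kappa:
                    param.mul_(kappa / norm)

# Hypothetical usage inside a training loop:
# loss.backward(); optimizer.step()
# project_parameter_groups(optimizer.param_groups, kappa=1.0)
```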

RecycleNet: Latent Feature Recycling Leads to Iterative Decision Refinement

no code implementations · 14 Sep 2023 · Gregor Koehler, Tassilo Wald, Constantin Ulrich, David Zimmerer, Paul F. Jaeger, Jörg K. H. Franke, Simon Kohl, Fabian Isensee, Klaus H. Maier-Hein

Using medical image segmentation as the evaluation environment, we show that latent feature recycling enables the network to iteratively refine initial predictions even beyond the iterations seen during training, converging towards an improved decision.

Decision Making · Image Segmentation · +3
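As a rough illustration of latent feature recycling, the toy module below feeds its own latent features back in as an extra input and can run more refinement iterations at inference than were seen during training; the layers and channel sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class RecyclingNet(nn.Module):
    """Toy sketch: latent features from one forward pass are recycled as
    an extra input to the next pass, refining the prediction iteratively."""

    def __init__(self, in_ch=1, latent_ch=16, out_ch=2):
        super().__init__()
        self.latent_ch = latent_ch
        self.encode = nn.Conv2d(in_ch + latent_ch, latent_ch, 3, padding=1)
        self.decode = nn.Conv2d(latent_ch, out_ch, 1)

    def forward(self, x, n_iters=3):
        b, _, h, w = x.shape
        latent = x.new_zeros(b, self.latent_ch, h, w)  # start from an empty latent
        for _ in range(n_iters):  # n_iters may exceed the count used in training
            latent = torch.relu(self.encode(torch.cat([x, latent], dim=1)))
        return self.decode(latent)
```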

MedNeXt: Transformer-driven Scaling of ConvNets for Medical Image Segmentation

1 code implementation · 17 Mar 2023 · Saikat Roy, Gregor Koehler, Constantin Ulrich, Michael Baumgartner, Jens Petersen, Fabian Isensee, Paul F. Jaeger, Klaus Maier-Hein

This leads to state-of-the-art performance on 4 tasks across CT and MRI modalities and varying dataset sizes, representing a modernized deep architecture for medical image segmentation.

Image Segmentation · Segmentation · +2

CRADL: Contrastive Representations for Unsupervised Anomaly Detection and Localization

no code implementations · 5 Jan 2023 · Carsten T. Lüth, David Zimmerer, Gregor Koehler, Paul F. Jaeger, Fabian Isensee, Jens Petersen, Klaus H. Maier-Hein

By utilizing representations from contrastive learning, we aim to reduce the over-fixation on low-level features and learn more semantically rich representations.

Contrastive Learning · Unsupervised Anomaly Detection
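A common recipe along these lines is sketched below, assuming a pretrained contrastive `encoder` (hypothetical here) and a simple Gaussian fit with Mahalanobis scoring on normal-data features; the paper may use a different density model in the latent space.

```python
import torch

def fit_gaussian(features):
    """Fit mean and inverse covariance of normal-data features (N x D)."""
    mu = features.mean(dim=0)
    centered = features - mu
    cov = centered.T @ centered / (features.size(0) - 1)
    cov += 1e-5 * torch.eye(cov.size(0))  # regularize for invertibility
    return mu, torch.linalg.inv(cov)

def anomaly_score(feat, mu, cov_inv):
    """Mahalanobis distance of one feature vector from the normal-data fit."""
    d = feat - mu
    return d @ cov_inv @ d

# Hypothetical usage with a contrastively pretrained encoder:
# feats = encoder(normal_images)
# mu, cov_inv = fit_gaussian(feats)
# score = anomaly_score(encoder(test_image), mu, cov_inv)
```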

Frequency Decomposition in Neural Processes

no code implementations · 1 Jan 2021 · Jens Petersen, Paul F. Jaeger, Gregor Koehler, David Zimmerer, Fabian Isensee, Klaus Maier-Hein

Neural Processes are a powerful tool for learning representations of function spaces purely from examples, allowing them to make predictions at test time conditioned on so-called context observations.
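The conditioning mechanism described here can be sketched in a few lines. Below is a minimal deterministic Neural Process variant: context (x, y) pairs are encoded and mean-aggregated into a single representation, which is concatenated with each target input for prediction; layer sizes and the mean aggregator are illustrative choices, not the paper's model.

```python
import torch
import torch.nn as nn

class MinimalNeuralProcess(nn.Module):
    """Deterministic NP sketch: context (x, y) pairs are encoded and
    mean-aggregated into a representation r, on which predictions
    for new target inputs are conditioned."""

    def __init__(self, x_dim=1, y_dim=1, r_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, r_dim), nn.ReLU(), nn.Linear(r_dim, r_dim))
        self.decoder = nn.Sequential(
            nn.Linear(x_dim + r_dim, r_dim), nn.ReLU(), nn.Linear(r_dim, y_dim))

    def forward(self, x_ctx, y_ctx, x_tgt):
        # Aggregate all context observations into one representation.
        r = self.encoder(torch.cat([x_ctx, y_ctx], dim=-1)).mean(dim=0)
        r = r.expand(x_tgt.size(0), -1)  # broadcast to every target point
        return self.decoder(torch.cat([x_tgt, r], dim=-1))
```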
