Search Results for author: Lukas Schott

Found 9 papers, 5 papers with code

Challenging Common Paradigms in Multi-Task Learning

no code implementations 8 Nov 2023 Cathrin Elich, Lukas Kirchdorfer, Jan M. Köhler, Lukas Schott

The notion of gradient conflicts has often been phrased as a problem specific to MTL (a sketch of how such conflicts are measured follows this entry).

Multi-Task Learning
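
The gradient conflicts mentioned above are commonly quantified as the cosine similarity between per-task gradients on the shared parameters, with a negative value read as a conflict. A minimal PyTorch sketch on a hypothetical two-task toy model (not the paper's setup):

    import torch

    # Hypothetical shared encoder with two task heads (illustration only).
    shared = torch.nn.Linear(16, 8)
    head_a = torch.nn.Linear(8, 1)
    head_b = torch.nn.Linear(8, 1)

    x = torch.randn(32, 16)
    y_a, y_b = torch.randn(32, 1), torch.randn(32, 1)

    z = shared(x)
    loss_a = torch.nn.functional.mse_loss(head_a(z), y_a)
    loss_b = torch.nn.functional.mse_loss(head_b(z), y_b)

    # Per-task gradients w.r.t. the shared parameters only.
    params = list(shared.parameters())
    grad_a = torch.autograd.grad(loss_a, params, retain_graph=True)
    grad_b = torch.autograd.grad(loss_b, params)

    flat_a = torch.cat([g.flatten() for g in grad_a])
    flat_b = torch.cat([g.flatten() for g in grad_b])

    # A negative cosine similarity is the usual operational definition of a conflict.
    cos = torch.nn.functional.cosine_similarity(flat_a, flat_b, dim=0)
    print(f"cosine similarity between task gradients: {cos.item():.3f}")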

Understanding Neural Coding on Latent Manifolds by Sharing Features and Dividing Ensembles

1 code implementation 6 Oct 2022 Martin Bjerke, Lukas Schott, Kristopher T. Jensen, Claudia Battistin, David A. Klindt, Benjamin A. Dunn

These innovations lead to more interpretable models of neural population activity that train well and perform better even on mixtures of complex latent manifolds.

Gaussian Processes

Score-Based Generative Classifiers

no code implementations 1 Oct 2021 Roland S. Zimmermann, Lukas Schott, Yang Song, Benjamin A. Dunn, David A. Klindt

In this work, we investigate score-based generative models as classifiers for natural images (a Bayes-rule sketch of generative classification follows this entry).

Classification
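
Generative classification of this kind rests on Bayes' rule: given class-conditional log-likelihoods log p(x|y), the prediction is the argmax over y of log p(x|y) + log p(y), since the evidence log p(x) is shared by all classes. A minimal NumPy sketch with a hypothetical stand-in likelihood in place of the paper's score-based models:

    import numpy as np

    def class_conditional_log_likelihood(x, y):
        # Hypothetical stand-in: in the paper's setting this would come from a
        # per-class score-based generative model.
        rng = np.random.default_rng(y)
        mean = rng.normal(size=x.shape)
        return -0.5 * np.sum((x - mean) ** 2)

    def generative_classify(x, num_classes, log_prior=None):
        # Bayes' rule: argmax_y log p(x|y) + log p(y).
        if log_prior is None:
            log_prior = np.full(num_classes, -np.log(num_classes))  # uniform prior
        scores = [class_conditional_log_likelihood(x, y) + log_prior[y]
                  for y in range(num_classes)]
        return int(np.argmax(scores))

    x = np.random.randn(3, 32, 32)  # toy "image"
    print(generative_classify(x, num_classes=10))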

Visual Representation Learning Does Not Generalize Strongly Within the Same Domain

1 code implementation ICLR 2022 Lukas Schott, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, Wieland Brendel

An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world.

Representation Learning

A simple way to make neural networks robust against diverse image corruptions

3 code implementations ECCV 2020 Evgenia Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel

The human visual system is remarkably robust against a wide range of naturally occurring variations and corruptions like rain or snow.

Towards the first adversarially robust neural network model on MNIST

3 code implementations ICLR 2019 Lukas Schott, Jonas Rauber, Matthias Bethge, Wieland Brendel

Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations; even for MNIST, one of the most common toy datasets in computer vision, no neural network model exists for which adversarial perturbations are large and make semantic sense to humans (a minimal attack sketch follows this entry).

Adversarial Robustness, Binarization +1
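
The tiny perturbations referred to above are typically crafted with gradient-based attacks such as the Fast Gradient Sign Method. A minimal PyTorch sketch against a generic toy classifier (not the paper's analysis-by-synthesis model):

    import torch
    import torch.nn.functional as F

    # Hypothetical small MNIST classifier, for illustration only.
    model = torch.nn.Sequential(
        torch.nn.Flatten(),
        torch.nn.Linear(28 * 28, 128),
        torch.nn.ReLU(),
        torch.nn.Linear(128, 10),
    )

    def fgsm(x, y, epsilon=0.3):
        # One signed-gradient ascent step on the loss, bounded in L-infinity norm.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

    x = torch.rand(8, 1, 28, 28)        # toy batch standing in for MNIST digits
    y = torch.randint(0, 10, (8,))
    x_adv = fgsm(x, y)
    print((x_adv - x).abs().max())      # perturbation size stays within epsilon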

Learned Watershed: End-to-End Learning of Seeded Segmentation

no code implementations ICCV 2017 Steffen Wolf, Lukas Schott, Ullrich Köthe, Fred Hamprecht

Learned boundary maps are known to outperform hand-crafted ones as a basis for the watershed algorithm (a seeded-watershed sketch follows this entry).

Segmentation
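
Seeded watershed itself, with a boundary map as the height function, is available off the shelf in scikit-image. A minimal sketch with a synthetic boundary map and hand-placed seeds (the paper instead learns the segmentation end-to-end):

    import numpy as np
    from skimage.segmentation import watershed

    # Synthetic boundary map: a high ridge splitting the image into two regions.
    boundary_map = np.zeros((64, 64))
    boundary_map[:, 31:33] = 1.0

    # Seeds (markers): one labeled pixel per region to be grown.
    markers = np.zeros_like(boundary_map, dtype=int)
    markers[32, 8] = 1
    markers[32, 56] = 2

    # Flooding starts at the seeds and treats the boundary map as a height
    # function, so the two basins meet along the ridge.
    labels = watershed(boundary_map, markers)
    print(np.unique(labels))  # -> [1 2]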

Comparative Study of Deep Learning Software Frameworks

no code implementations 19 Nov 2015 Soheil Bahrampour, Naveen Ramakrishnan, Lukas Schott, Mohak Shah

The study covers several types of deep learning architectures and evaluates the frameworks' performance on a single machine in both (multi-threaded) CPU and GPU (Nvidia Titan X) settings (a minimal timing sketch follows this entry).
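
A present-day analogue of that timing setup, sketched in PyTorch under the assumption that average forward-pass latency on CPU versus GPU is the quantity of interest (the paper itself benchmarked 2015-era frameworks, not PyTorch):

    import time
    import torch

    def time_forward(model, x, iters=50):
        # Average forward-pass latency; synchronize when timing CUDA kernels.
        if x.is_cuda:
            torch.cuda.synchronize()
        start = time.perf_counter()
        with torch.no_grad():
            for _ in range(iters):
                model(x)
        if x.is_cuda:
            torch.cuda.synchronize()
        return (time.perf_counter() - start) / iters

    model = torch.nn.Sequential(torch.nn.Linear(1024, 1024), torch.nn.ReLU(),
                                torch.nn.Linear(1024, 10))
    x = torch.randn(256, 1024)

    print(f"CPU: {time_forward(model, x) * 1e3:.2f} ms/forward")
    if torch.cuda.is_available():
        model_gpu, x_gpu = model.cuda(), x.cuda()
        print(f"GPU: {time_forward(model_gpu, x_gpu) * 1e3:.2f} ms/forward")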
