Search Results for author: Björn Eskofier

Found 7 papers, 2 papers with code

Achieving Efficient and Realistic Full-Radar Simulations and Automatic Data Annotation by exploiting Ray Meta Data of a Radar Ray Tracing Simulator

no code implementations · 23 May 2023 · Christian Schüßler, Marcel Hoffmann, Vanessa Wirth, Björn Eskofier, Tim Weyrich, Marc Stamminger, Martin Vossiek

This approach not only makes almost perfect annotations possible, but also allows the annotation of exotic effects, such as multi-path effects, and the labeling of signal parts originating from different parts of an object.

Object

Raising the Bar for Certified Adversarial Robustness with Diffusion Models

no code implementations · 17 May 2023 · Thomas Altstidl, David Dobre, Björn Eskofier, Gauthier Gidel, Leo Schwinn

In this work, we demonstrate that a similar approach can substantially improve deterministic certified defenses.

Adversarial Robustness

FastAMI -- a Monte Carlo Approach to the Adjustment for Chance in Clustering Comparison Metrics

2 code implementations · 3 May 2023 · Kai Klede, Leo Schwinn, Dario Zanca, Björn Eskofier

Clustering is at the very core of machine learning, and its applications proliferate with the increasing availability of data.

Clustering
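The adjusted-for-chance idea behind FastAMI can be illustrated with a minimal Monte Carlo sketch: instead of computing the expected mutual information between two labelings exactly, estimate it by averaging over random permutations, then plug it into the standard adjustment formula (score − E[score]) / (max − E[score]). This is only an illustration of the general principle, not the authors' algorithm; all function names here are hypothetical.

```python
import numpy as np

def mutual_info(u, v):
    """Mutual information (in nats) between two label assignments."""
    u, v = np.asarray(u), np.asarray(v)
    n = len(u)
    # Build the contingency table of co-occurrence counts.
    cu, ui = np.unique(u, return_inverse=True)
    cv, vi = np.unique(v, return_inverse=True)
    cont = np.zeros((len(cu), len(cv)))
    np.add.at(cont, (ui, vi), 1)
    pij = cont / n
    pi = pij.sum(axis=1, keepdims=True)
    pj = pij.sum(axis=0, keepdims=True)
    nz = pij > 0
    return float((pij[nz] * np.log(pij[nz] / (pi @ pj)[nz])).sum())

def entropy(u):
    _, counts = np.unique(u, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def monte_carlo_ami(u, v, n_samples=200, rng=None):
    """Adjusted mutual information where E[MI] under the null model is
    estimated by Monte Carlo over random permutations of one labeling
    (a sketch of the adjustment-for-chance idea, not FastAMI itself)."""
    rng = np.random.default_rng(rng)
    mi = mutual_info(u, v)
    v = np.asarray(v)
    emi = np.mean([mutual_info(u, rng.permutation(v))
                   for _ in range(n_samples)])
    max_mi = 0.5 * (entropy(u) + entropy(v))  # "average" normalization
    return (mi - emi) / (max_mi - emi)
```

Identical labelings score 1 by construction, while unrelated labelings score near 0, because the chance-level mutual information is subtracted out.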

Just a Matter of Scale? Reevaluating Scale Equivariance in Convolutional Neural Networks

1 code implementation · 18 Nov 2022 · Thomas Altstidl, An Nguyen, Leo Schwinn, Franz Köferl, Christopher Mutschler, Björn Eskofier, Dario Zanca

We also demonstrate that our family of models is able to generalize well towards larger scales and improve scale equivariance.

Behind the Machine's Gaze: Neural Networks with Biologically-inspired Constraints Exhibit Human-like Visual Attention

no code implementations · 19 Apr 2022 · Leo Schwinn, Doina Precup, Björn Eskofier, Dario Zanca

By and large, existing computational models of visual attention tacitly assume perfect vision and full access to the stimulus and thereby deviate from foveated biological vision.

Towards Rapid and Robust Adversarial Training with One-Step Attacks

no code implementations · 24 Feb 2020 · Leo Schwinn, René Raab, Björn Eskofier

Further, we add a learnable regularization step prior to the neural network, which we call Pixelwise Noise Injection Layer (PNIL).
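The snippet only names the Pixelwise Noise Injection Layer (PNIL); as a loose illustration of a learnable layer placed before a network, one might sketch it as additive Gaussian noise scaled by a learnable per-pixel parameter. The noise mechanism, the class name, and all parameters below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

class PixelwiseNoiseInjection:
    """Hypothetical sketch: inject Gaussian noise scaled by a learnable
    per-pixel parameter before the input reaches the network.
    (The actual PNIL mechanism is not specified in the snippet above.)"""

    def __init__(self, shape, init_scale=0.1, rng=None):
        # One learnable scale per pixel (assumed; trained with the network).
        self.scale = np.full(shape, init_scale)
        self.rng = np.random.default_rng(rng)

    def __call__(self, x, train=True):
        if not train:
            return x  # no noise injection at inference time
        noise = self.rng.standard_normal(x.shape)
        return x + self.scale * noise
```

Placing such a layer in front of the network lets the perturbation magnitude be learned jointly with the classifier rather than fixed by hand.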
