Search Results for author: Jonas Rauber

Found 13 papers, 12 papers with code

EagerPy: Writing Code That Works Natively with PyTorch, TensorFlow, JAX, and NumPy

1 code implementation · 10 Aug 2020 · Jonas Rauber, Matthias Bethge, Wieland Brendel

EagerPy is a Python framework that lets you write code that automatically works natively with PyTorch, TensorFlow, JAX, and NumPy.
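As an illustration, here is a minimal sketch of EagerPy's core pattern: wrap a native tensor, compute with a unified API, and unwrap the result. It is based on EagerPy's documented `astensor`/`.raw` interface; the NumPy input is just one example, and the same function accepts PyTorch, TensorFlow, or JAX tensors.

```python
# Minimal sketch of EagerPy's wrap/compute/unwrap pattern.
import eagerpy as ep
import numpy as np

def l2_norm(x):
    x = ep.astensor(x)                # wrap a PyTorch/TensorFlow/JAX/NumPy tensor
    result = x.square().sum().sqrt()  # framework-agnostic computation
    return result.raw                 # unwrap back to the native tensor type

print(l2_norm(np.array([3.0, 4.0])))  # prints 5.0
```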

Fast Differentiable Clipping-Aware Normalization and Rescaling

1 code implementation · 15 Jul 2020 · Jonas Rauber, Matthias Bethge

When the rescaled perturbation $\eta \vec{\delta}$ is added to a starting point $\vec{x} \in D$ (where $D$ is the data domain, e.g. $D = [0, 1]^n$), the resulting vector $\vec{v} = \vec{x} + \eta \vec{\delta}$ will in general not be in $D$.
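For intuition, the sketch below is a naive bisection baseline for the problem the paper solves in closed form: find a scale $\eta$ such that the perturbation surviving clipping has a target L2 norm $\epsilon$. It assumes $D = [0, 1]^n$; the paper's algorithm is analytical and differentiable, unlike this loop.

```python
# Naive bisection baseline for clipping-aware rescaling (a sketch; the paper
# instead derives eta analytically and differentiably). Assumes D = [0, 1]^n.
import numpy as np

def clipped_norm(x, delta, eta):
    """L2 norm of the perturbation that survives clipping to [0, 1]."""
    v = np.clip(x + eta * delta, 0.0, 1.0)
    return np.linalg.norm(v - x)

def rescale_eta(x, delta, eps, eta_hi=1e6, iters=60):
    """Find eta with ||clip(x + eta*delta) - x||_2 ~= eps via bisection.

    clipped_norm is non-decreasing in eta, so bisection converges; if eps
    exceeds the saturated norm, the result approaches eta_hi.
    """
    lo, hi = 0.0, eta_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if clipped_norm(x, delta, mid) < eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = np.array([0.1, 0.9, 0.5])
delta = np.array([1.0, 1.0, -1.0])
eta = rescale_eta(x, delta, eps=0.3)
print(eta, clipped_norm(x, delta, eta))  # norm should be ~0.3
```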

Modeling patterns of smartphone usage and their relationship to cognitive health

no code implementations · 13 Nov 2019 · Jonas Rauber, Emily B. Fox, Leon A. Gatys

The ubiquity of smartphone usage in many people's lives makes it a rich source of information about a person's mental and cognitive state.

Accurate, reliable and fast robustness evaluation

1 code implementation · NeurIPS 2019 · Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, Matthias Bethge

We here develop a new set of gradient-based adversarial attacks which (a) are more reliable in the face of gradient-masking than other gradient-based attacks, (b) perform better and are more query efficient than current state-of-the-art gradient-based attacks, (c) can be flexibly adapted to a wide range of adversarial criteria and (d) require virtually no hyperparameter tuning.
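For context, a generic gradient-based attack looks like the PGD-style sketch below. This is a standard baseline, not one of the attacks proposed in this paper, and it assumes a PyTorch classifier with inputs in $[0, 1]$.

```python
# Generic PGD-style gradient-based attack (a standard baseline, NOT the
# paper's proposed attacks). Assumes a PyTorch model with inputs in [0, 1].
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.03, step=0.01, iters=10):
    """Projected gradient ascent on the loss under an L-infinity constraint."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()                      # ascend the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                           # stay in the data domain
        x_adv = x_adv.detach()
    return x_adv
```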

Generalisation in humans and deep neural networks

2 code implementations · NeurIPS 2018 · Robert Geirhos, Carlos R. Medina Temme, Jonas Rauber, Heiko H. Schütt, Matthias Bethge, Felix A. Wichmann

We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradations.

Object Recognition

Adversarial Vision Challenge

2 code implementations · 6 Aug 2018 · Wieland Brendel, Jonas Rauber, Alexey Kurakin, Nicolas Papernot, Behar Veliqi, Marcel Salathé, Sharada P. Mohanty, Matthias Bethge

The NIPS 2018 Adversarial Vision Challenge is a competition to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks.

Towards the first adversarially robust neural network model on MNIST

3 code implementations · ICLR 2019 · Lukas Schott, Jonas Rauber, Matthias Bethge, Wieland Brendel

Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations and even for MNIST, one of the most common toy datasets in computer vision, no neural network model exists for which adversarial perturbations are large and make semantic sense to humans.

Adversarial Robustness · Binarization · +1

Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models

6 code implementations · ICLR 2018 · Wieland Brendel, Jonas Rauber, Matthias Bethge

Such decision-based attacks are (1) applicable to real-world black-box models such as autonomous cars, (2) need less knowledge and are easier to apply than transfer-based attacks and (3) are more robust to simple defences than gradient- or score-based attacks.

BIG-bench Machine Learning
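The decision-based setting can be sketched as follows: the attacker queries only the model's final decision, never gradients or scores, and performs a rejection-sampling random walk that stays adversarial while creeping toward the original input. This is heavily simplified relative to the paper's Boundary Attack; `predict` is a hypothetical black-box function returning a class label.

```python
# Heavily simplified sketch of the decision-based idea (inspired by, but not
# identical to, the paper's Boundary Attack). `predict` is a hypothetical
# black-box that returns a class label for a NumPy array in [0, 1]^n.
import numpy as np

def decision_based_attack(predict, x, x_adv_init, true_label,
                          steps=1000, sigma=0.01, toward=0.01):
    x_adv = x_adv_init  # any starting point already classified differently
    for _ in range(steps):
        # random perturbation plus a small step toward the original input
        candidate = x_adv + sigma * np.random.randn(*x.shape)
        candidate = candidate + toward * (x - candidate)
        candidate = np.clip(candidate, 0.0, 1.0)
        # keep the candidate only if the model still misclassifies it
        if predict(candidate) != true_label:
            x_adv = candidate
    return x_adv
```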

Foolbox: A Python toolbox to benchmark the robustness of machine learning models

6 code implementations · 13 Jul 2017 · Jonas Rauber, Wieland Brendel, Matthias Bethge

Foolbox is a new Python package to generate such adversarial perturbations and to quantify and compare the robustness of machine learning models.

Adversarial Attack · BIG-bench Machine Learning
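A minimal usage sketch against a toy PyTorch model follows, based on the Foolbox 3 API (details may differ across versions); the model here is untrained and purely illustrative.

```python
# Sketch of benchmarking robustness with Foolbox 3 (API may differ by version).
import foolbox as fb
import torch
import torch.nn as nn

# Toy, untrained model just to keep the sketch self-contained.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).eval()
fmodel = fb.PyTorchModel(model, bounds=(0, 1))

images = torch.rand(8, 1, 28, 28)          # a batch of dummy inputs
labels = torch.randint(0, 10, (8,))        # dummy integer labels

attack = fb.attacks.LinfPGD()
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=0.03)
print("attack success rate:", is_adv.float().mean().item())
```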

Comparing deep neural networks against humans: object recognition when the signal gets weaker

1 code implementation · 21 Jun 2017 · Robert Geirhos, David H. J. Janssen, Heiko H. Schütt, Jonas Rauber, Matthias Bethge, Felix A. Wichmann

In addition, we find progressively diverging classification error-patterns between humans and DNNs when the signal gets weaker, indicating that there may still be marked differences in the way humans and current DNNs perform visual object recognition.

General Classification · Object · +1
