Search Results for author: Paul Gavrikov

Found 11 papers, 10 papers with code

Can Biases in ImageNet Models Explain Generalization?

1 code implementation • 1 Apr 2024 • Paul Gavrikov, Janis Keuper

The robust generalization of models to rare, in-distribution (ID) samples drawn from the long tail of the training distribution and to out-of-training-distribution (OOD) samples is one of the major challenges of current deep learning methods.

Image Classification

Are Vision Language Models Texture or Shape Biased and Can We Steer Them?

1 code implementation • 14 Mar 2024 • Paul Gavrikov, Jovita Lukasik, Steffen Jung, Robert Geirhos, Bianca Lamm, Muhammad Jehanzeb Mirza, Margret Keuper, Janis Keuper

If text does indeed influence visual biases, this suggests that they may be steerable not just through visual input but also through language: a hypothesis that we confirm through extensive experiments.

Image Captioning • Image Classification • +3
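As an illustration of prompt-based steering, here is a minimal sketch using CLIP zero-shot classification via Hugging Face transformers. This is an assumption-laden stand-in: the paper studies instruction-tuned vision language models, and the file name, prompts, and cue-conflict setup below are hypothetical.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical cue-conflict image, e.g. a cat silhouette with elephant texture.
image = Image.open("cue_conflict.png")

# Prompts emphasizing shape vs. texture; the wording steers the prediction.
prompts = [
    "a photo of a cat, identified by its shape",
    "a photo of an elephant, identified by its skin texture",
]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(prompts, probs[0].tolist())))
```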

Don't Look into the Sun: Adversarial Solarization Attacks on Image Classifiers

1 code implementation • 24 Aug 2023 • Paul Gavrikov, Janis Keuper

Assessing the robustness of deep neural networks against out-of-distribution inputs is crucial, especially in safety-critical domains like autonomous driving, but also in safety systems where malicious actors can digitally alter inputs to circumvent safety guards.

Adversarial Robustness • Image Classification
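The attack idea can be sketched as a simple grid search over solarization thresholds; the paper's exact search procedure may differ. torchvision's solarize inverts all pixel values above the threshold:

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

@torch.no_grad()
def solarization_attack(model, x, y, steps=64):
    # x: (N, 3, H, W) float tensor in [0, 1]; y: (N,) integer labels.
    # Grid-search the threshold that maximizes the classification loss.
    worst_loss, worst_x = -float("inf"), x
    for t in torch.linspace(0.0, 1.0, steps):
        x_adv = TF.solarize(x, threshold=t.item())  # invert pixels above t
        loss = F.cross_entropy(model(x_adv), y)
        if loss.item() > worst_loss:
            worst_loss, worst_x = loss.item(), x_adv
    return worst_x
```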

On the Interplay of Convolutional Padding and Adversarial Robustness

1 code implementation • 12 Aug 2023 • Paul Gavrikov, Janis Keuper

It is common practice to apply padding prior to convolution operations to preserve the resolution of feature maps in Convolutional Neural Networks (CNNs).

Adversarial Robustness
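A minimal PyTorch example of the mechanism in question: zero-padding a 3x3 convolution preserves the input resolution, while omitting it shrinks the feature map; padding_mode selects alternative padding contents, whose interplay with robustness is what the paper examines.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)

# "Same" padding keeps the 32x32 resolution; no padding shrinks it to 30x30.
same = nn.Conv2d(3, 8, kernel_size=3, padding=1)
valid = nn.Conv2d(3, 8, kernel_size=3, padding=0)
print(same(x).shape)   # torch.Size([1, 8, 32, 32])
print(valid(x).shape)  # torch.Size([1, 8, 30, 30])

# Alternative padding contents besides zeros.
reflect = nn.Conv2d(3, 8, kernel_size=3, padding=1, padding_mode="reflect")
print(reflect(x).shape)  # torch.Size([1, 8, 32, 32])
```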

An Extended Study of Human-like Behavior under Adversarial Training

1 code implementation • 22 Mar 2023 • Paul Gavrikov, Janis Keuper, Margret Keuper

Adversarial training offers a partial solution to this issue by training models on worst-case perturbations.
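For context, a minimal sketch of standard PGD-based adversarial training in the spirit of Madry et al., the setting such studies build on; hyperparameters are illustrative:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # Untargeted L-inf PGD: gradient-ascent steps, projected back onto
    # the eps-ball around the clean input x (assumed to lie in [0, 1]).
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to the eps-ball
        x_adv = x_adv.clamp(0, 1)                 # stay a valid image
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    # Train on the worst-case perturbation instead of the clean sample.
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```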

The Power of Linear Combinations: Learning with Random Convolutions

no code implementations • 26 Jan 2023 • Paul Gavrikov, Janis Keuper

Following the traditional paradigm of convolutional neural networks (CNNs), modern CNNs manage to keep pace with more recent models, such as transformer-based ones, by increasing not only model depth and width but also kernel size.

Image Classification • Inductive Bias
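A minimal sketch of the idea probed here, assuming the common decomposition into frozen random spatial filters followed by a learnable 1x1 (pointwise) linear combination; names and details are illustrative:

```python
import torch
import torch.nn as nn

class RandomConvBlock(nn.Module):
    # Spatial filters stay at their random initialization; only the 1x1
    # "linear combination" receives gradients during training.
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.spatial = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, bias=False)
        self.spatial.weight.requires_grad_(False)  # frozen random filters
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)  # learnable

    def forward(self, x):
        return self.pointwise(self.spatial(x))
```

Because only the pointwise weights train, the network learns linear combinations of fixed random filters rather than the filters themselves.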

Does Medical Imaging learn different Convolution Filters?

1 code implementation • 25 Oct 2022 • Paul Gavrikov, Janis Keuper

However, among the studied image domains, medical imaging models stood out as significant outliers with "spikey" filter weight distributions, learning clusters of highly specific filters that differ from other domains.
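A sketch of how such filter distributions can be collected from a trained model; the authors' CNN Filter DB pipeline is far more extensive, and the model choice here is illustrative:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1")

# Gather every 3x3 convolution kernel, one row per flattened kernel.
filters = torch.cat([
    m.weight.detach().reshape(-1, 9)
    for m in model.modules()
    if isinstance(m, nn.Conv2d) and m.kernel_size == (3, 3)
])
print(filters.shape)        # (num_kernels, 9)
# Per-coefficient variance: "spikey" distributions concentrate near zero.
print(filters.var(dim=0))
```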

Robust Models are less Over-Confident

1 code implementation • 12 Oct 2022 • Julia Grabinski, Paul Gavrikov, Janis Keuper, Margret Keuper

Further, our analysis of robust models shows that not only adversarial training (AT) but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences.

Adversarial Robustness
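One simple confidence statistic for this kind of comparison is the mean maximum-softmax probability; a minimal sketch (the paper's analysis additionally covers calibration and the building blocks named above):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_confidence(model, loader, device="cpu"):
    # Mean max-softmax probability over a dataset: the statistic one
    # would compare between robust and non-robust models.
    confs = []
    for x, _ in loader:
        probs = F.softmax(model(x.to(device)), dim=-1)
        confs.append(probs.max(dim=-1).values)
    return torch.cat(confs).mean().item()
```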

Adversarial Robustness through the Lens of Convolutional Filters

1 code implementation • 5 Apr 2022 • Paul Gavrikov, Janis Keuper

Deep learning models are intrinsically sensitive to distribution shifts in the input data.

Adversarial Robustness

CNN Filter DB: An Empirical Investigation of Trained Convolutional Filters

1 code implementation • CVPR 2022 • Paul Gavrikov, Janis Keuper

In a first use case of the proposed dataset, we show properties of many publicly available pre-trained models that are highly relevant for practical applications: I) We analyze distribution shifts (or the lack thereof) between trained filters along different meta-parameter axes, such as the visual category of the dataset, task, architecture, or layer depth.

Image Classification
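A crude sketch of measuring such a shift between two filter sets, reusing the flattened (n, 9) filter matrices from the extraction sketch above; the per-coefficient two-sample Kolmogorov-Smirnov statistic is an illustrative choice, not necessarily the paper's measure:

```python
import numpy as np
from scipy.stats import ks_2samp

def filter_shift(filters_a, filters_b):
    # filters_*: (n, 9) arrays of flattened 3x3 kernels from two models.
    # One KS statistic per kernel coefficient; larger means more shift.
    return np.array([
        ks_2samp(filters_a[:, i], filters_b[:, i]).statistic
        for i in range(filters_a.shape[1])
    ])
```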

An Empirical Investigation of Model-to-Model Distribution Shifts in Trained Convolutional Filters

1 code implementation • 20 Jan 2022 • Paul Gavrikov, Janis Keuper

We argue that the observed properties are a valuable basis for further investigation into how shifts in the input data affect the generalization abilities of CNN models, and for novel methods for more robust transfer learning in this domain.

Transfer Learning
