1 code implementation • 1 Apr 2024 • Paul Gavrikov, Janis Keuper
The robust generalization of models to rare, in-distribution (ID) samples drawn from the long tail of the training distribution and to out-of-training-distribution (OOD) samples is one of the major challenges of current deep learning methods.
1 code implementation • 14 Mar 2024 • Paul Gavrikov, Jovita Lukasik, Steffen Jung, Robert Geirhos, Bianca Lamm, Muhammad Jehanzeb Mirza, Margret Keuper, Janis Keuper
If text does indeed influence visual biases, this suggests that we may be able to steer visual biases not just through visual input but also through language: a hypothesis that we confirm through extensive experiments.
1 code implementation • 24 Aug 2023 • Paul Gavrikov, Janis Keuper
Assessing the robustness of deep neural networks against out-of-distribution inputs is crucial, especially in safety-critical domains like autonomous driving, but also in safety systems where malicious actors can digitally alter inputs to circumvent safety guards.
1 code implementation • 12 Aug 2023 • Paul Gavrikov, Janis Keuper
It is common practice to apply padding prior to convolution operations to preserve the resolution of feature maps in Convolutional Neural Networks (CNNs).
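The resolution-preserving effect of padding follows directly from the output-size formula of a stride-1 convolution. A minimal sketch (the helper name `conv2d_output_size` is hypothetical, not from the paper):

```python
def conv2d_output_size(n, k, pad):
    # output side length of a stride-1 convolution on an n x n input
    # with a k x k kernel and `pad` zero-padded pixels per border
    return n + 2 * pad - k + 1

# without padding, a 3x3 kernel shrinks each feature map by 2 pixels
print(conv2d_output_size(32, 3, pad=0))  # 30
# "same" padding (pad = (k - 1) // 2) preserves the resolution
print(conv2d_output_size(32, 3, pad=1))  # 32
```

Stacking many unpadded convolutions would shrink feature maps layer by layer, which is why padding is applied by default in most CNN architectures.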
1 code implementation • 22 Mar 2023 • Paul Gavrikov, Janis Keuper, Margret Keuper
Adversarial training offers a partial solution to this issue by training models on worst-case perturbations of their inputs.
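A worst-case perturbation can be approximated in closed form for a linear model, which makes the idea easy to illustrate. The sketch below (a toy example with hypothetical names, not code from the paper) shows a one-step, FGSM-style perturbation; adversarial training then means fitting the model on such perturbed inputs instead of the clean ones:

```python
def fgsm_perturb(w, b, x, y, eps):
    # one-step worst-case perturbation for a linear score s(x) = w.x + b
    # with label y in {-1, +1}: the loss-maximizing step moves each input
    # coordinate by eps against the margin, i.e. x_i - eps * y * sign(w_i)
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * y * sign(wi) for xi, wi in zip(x, w)]

w, b = [0.5, -2.0, 0.0], 0.1
x, y, eps = [1.0, 1.0, 1.0], 1, 0.25
print(fgsm_perturb(w, b, x, y, eps))  # [0.75, 1.25, 1.0]
```

For deep networks the same step is taken along the sign of the input gradient of the loss, and stronger multi-step variants (e.g. PGD) are typically used in practice.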
no code implementations • 26 Jan 2023 • Paul Gavrikov, Janis Keuper
Following the traditional paradigm of convolutional neural networks (CNNs), modern CNNs keep pace with more recent models, such as transformer-based architectures, by increasing not only model depth and width but also kernel size.
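Growing the kernel size increases the parameter count of a convolution quadratically in the kernel side length. A quick back-of-the-envelope sketch (the helper name `conv_params` is hypothetical):

```python
def conv_params(c_in, c_out, k):
    # parameter count of a k x k convolution layer (weights + biases)
    return c_out * (c_in * k * k + 1)

# moving from 3x3 to 7x7 kernels grows the layer by roughly 5.4x
print(conv_params(64, 64, 3))  # 36928
print(conv_params(64, 64, 7))  # 200768
```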
1 code implementation • 25 Oct 2022 • Paul Gavrikov, Janis Keuper
However, among the studied image domains, medical imaging models appeared to be significant outliers, exhibiting "spiky" distributions, and therefore learn clusters of highly specific filters that differ from other domains.
1 code implementation • 12 Oct 2022 • Julia Grabinski, Paul Gavrikov, Janis Keuper, Margret Keuper
Further, our analysis of robust models shows that not only adversarial training (AT) but also the model's building blocks (such as activation functions and pooling) have a strong influence on the models' prediction confidences.
1 code implementation • 5 Apr 2022 • Paul Gavrikov, Janis Keuper
Deep learning models are intrinsically sensitive to distribution shifts in the input data.
1 code implementation • CVPR 2022 • Paul Gavrikov, Janis Keuper
In a first use case of the proposed dataset, we can show highly relevant properties of many publicly available pre-trained models for practical applications: I) We analyze distribution shifts (or the lack thereof) between trained filters along different axes of meta-parameters, like visual category of the dataset, task, architecture, or layer depth.
Ranked #7 on Image Classification on Fashion-MNIST
1 code implementation • 20 Jan 2022 • Paul Gavrikov, Janis Keuper
We argue that the observed properties are a valuable source for further investigation into a better understanding of how shifts in the input data affect the generalization abilities of CNN models, and for novel methods enabling more robust transfer learning in this domain.