no code implementations • 22 Nov 2024 • Lars Nieradzik, Henrike Stephani, Janis Keuper
This research makes an important contribution to the development of attribution maps by providing a reliable and consistent evaluation framework.
no code implementations • 18 Nov 2024 • Lars Nieradzik, Henrike Stephani, Jördis Sieburg-Rockel, Stephanie Helmling, Andrea Olbrich, Stephanie Wrage, Janis Keuper
Wood species identification plays a crucial role in various industries, from ensuring the legality of timber products to advancing ecological conservation efforts.
1 code implementation • 18 Oct 2024 • Paul Gavrikov, Shashank Agnihotri, Margret Keuper, Janis Keuper
Our findings reveal that the training method strongly influences which layers become critical to the decision function for a given task.
no code implementations • 7 Sep 2024 • Lars Nieradzik, Henrike Stephani, Janis Keuper
This paper introduces Top-GAP, a novel regularization technique that enhances the explainability and robustness of convolutional neural networks.
no code implementations • 28 Aug 2024 • Bianca Lamm, Janis Keuper
This paper analyzes the performance and limits of various VLMs in the context of VQA and OCR [5, 9, 12] tasks in a production-level scenario.
no code implementations • 11 Jun 2024 • Shashank Agnihotri, Julia Grabinski, Janis Keuper, Margret Keuper
Image restoration networks are usually comprised of an encoder and a decoder, responsible for aggregating image content from noisy, distorted data and for restoring clean, undistorted images, respectively.
no code implementations • 14 May 2024 • Luisa Schwirten, Jannes Scholz, Daniel Kondermann, Janis Keuper
Datasets labelled by human annotators are widely used in the training and testing of machine learning models.
1 code implementation • CVPR 2024 • Paul Gavrikov, Janis Keuper
The robust generalization of models to rare, in-distribution (ID) samples drawn from the long tail of the training distribution and to out-of-training-distribution (OOD) samples is one of the major challenges of current deep learning methods.
no code implementations • 26 Mar 2024 • Patrick Grommelt, Louis Weiss, Franz-Josef Pfreundt, Janis Keuper
In this paper, we emphasize that many datasets for AI-generated image detection contain biases related to JPEG compression and image size.
no code implementations • 16 Mar 2024 • Martin Spitznagel, Janis Keuper
Data-driven modeling of complex physical systems is receiving a growing amount of attention in the simulation and machine learning communities.
1 code implementation • 14 Mar 2024 • Paul Gavrikov, Jovita Lukasik, Steffen Jung, Robert Geirhos, Bianca Lamm, Muhammad Jehanzeb Mirza, Margret Keuper, Janis Keuper
If text does indeed influence visual biases, this suggests that we may be able to steer visual biases not just through visual input but also through language: a hypothesis that we confirm through extensive experiments.
no code implementations • 18 Feb 2024 • Lars Nieradzik, Henrike Stephani, Jördis Sieburg-Rockel, Stephanie Helmling, Andrea Olbrich, Janis Keuper
In this study, we explore the explainability of neural networks in agriculture and forestry, specifically in fertilizer treatment classification and wood identification.
no code implementations • 12 Jan 2024 • Peter Lorenz, Ricard Durall, Janis Keuper
In recent years, diffusion models (DMs) have drawn significant attention for their success in approximating data distributions, yielding state-of-the-art generative results.
1 code implementation • 29 Sep 2023 • Bianca Lamm, Janis Keuper
In this paper, we introduce the first publicly available large-scale dataset for "visual entity matching", based on a production level use case in the retail domain.
1 code implementation • 24 Aug 2023 • Paul Gavrikov, Janis Keuper
Assessing the robustness of deep neural networks against out-of-distribution inputs is crucial, especially in safety-critical domains like autonomous driving, but also in safety systems where malicious actors can digitally alter inputs to circumvent safety guards.
1 code implementation • 12 Aug 2023 • Paul Gavrikov, Janis Keuper
It is common practice to apply padding prior to convolution operations to preserve the resolution of feature maps in Convolutional Neural Networks (CNNs).
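The resolution-preserving role of padding can be illustrated with a minimal NumPy sketch (a toy 2-D convolution of our own, not the paper's implementation; all function names here are hypothetical):

```python
import numpy as np

def conv2d_valid(x, k):
    """'Valid' convolution: no padding, so the output shrinks by k-1 per axis.
    (Strictly a cross-correlation, as CNN frameworks implement 'convolution'.)"""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def conv2d_same(x, k):
    """'Same' convolution: zero-pad first so the feature map keeps its size."""
    kh, kw = k.shape
    return conv2d_valid(np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2))), k)

x = np.random.default_rng(0).random((8, 8))
k = np.ones((3, 3)) / 9.0
print(conv2d_valid(x, k).shape)  # (6, 6): resolution lost without padding
print(conv2d_same(x, k).shape)   # (8, 8): resolution preserved with padding
```

With a 3x3 kernel, "valid" convolution shrinks an 8x8 input to 6x6, while one pixel of zero-padding per side keeps the full 8x8 resolution.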
1 code implementation • 19 Jul 2023 • Julia Grabinski, Janis Keuper, Margret Keuper
To facilitate such a study, several challenges need to be addressed:
1) we need an effective means to train models with large filters (potentially as large as the input data) without increasing the number of learnable parameters;
2) the employed convolution operation should be a plug-and-play module that can replace conventional convolutions in a CNN and allow for an efficient implementation in current frameworks;
3) the study of filter sizes has to be decoupled from other aspects such as the network width or the number of learnable parameters;
4) the cost of the convolution operation itself has to remain manageable, i.e. we cannot naively increase the size of the convolution kernel.
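One generic way to keep the cost of very large kernels manageable is convolution in the Fourier domain, where the cost depends on the signal length rather than the kernel size. A hedged NumPy sketch of the general idea (our own illustration, not the paper's module):

```python
import numpy as np

def fft_conv1d_circular(x, k):
    """Circular 1-D convolution via the FFT. The cost is O(n log n) in the
    signal length n and independent of the kernel size, so kernels as
    large as the input stay affordable."""
    n = len(x)
    kp = np.zeros(n)
    kp[:len(k)] = k                        # zero-pad the kernel to length n
    return np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(kp), n)

rng = np.random.default_rng(0)
x = rng.standard_normal(128)
k = rng.standard_normal(128)               # a kernel as large as the input
direct = np.array([sum(x[(i - j) % 128] * k[j] for j in range(128))
                   for i in range(128)])   # naive O(n^2) reference
print(np.allclose(fft_conv1d_circular(x, k), direct))  # True
```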
1 code implementation • 19 Jul 2023 • Julia Grabinski, Janis Keuper, Margret Keuper
Convolutional neural networks encode images through a sequence of convolutions, normalizations and non-linearities as well as downsampling operations into potentially strong semantic embeddings.
no code implementations • 18 Jul 2023 • Lars Nieradzik, Jördis Sieburg-Rockel, Stephanie Helmling, Janis Keuper, Thomas Weibel, Andrea Olbrich, Henrike Stephani
We have developed a methodology for the systematic generation of a large image dataset of macerated wood references, which we used to generate image data for nine hardwood genera.
no code implementations • 6 Jul 2023 • Janis Keuper
The mathematical representation of data in the Spherical Harmonic (SH) domain has recently regained increasing interest in the machine learning community.
no code implementations • 5 Jul 2023 • Peter Lorenz, Ricard Durall, Janis Keuper
Diffusion models have recently been successfully applied to the visual synthesis of strikingly realistic images.
1 code implementation • 5 May 2023 • Daniel Ladwig, Bianca Lamm, Janis Keuper
We show that the combination of image and text as input improves the classification of visually difficult-to-distinguish products.
1 code implementation • 22 Mar 2023 • Paul Gavrikov, Janis Keuper, Margret Keuper
Adversarial training poses a partial solution to address this issue by training models on worst-case perturbations.
no code implementations • 26 Jan 2023 • Paul Gavrikov, Janis Keuper
Following the traditional paradigm of convolutional neural networks (CNNs), modern CNNs manage to keep pace with more recent models, such as transformer-based ones, by increasing not only model depth and width but also the kernel size.
1 code implementation • 13 Dec 2022 • Peter Lorenz, Margret Keuper, Janis Keuper
Convolutional neural networks (CNN) define the state-of-the-art solution on many perceptual tasks.
1 code implementation • 25 Oct 2022 • Paul Gavrikov, Janis Keuper
However, among the studied image domains, medical imaging models appeared to show significant outliers through "spiky" distributions, and, therefore, learn clusters of highly specific filters different from other domains.
1 code implementation • 12 Oct 2022 • Julia Grabinski, Paul Gavrikov, Janis Keuper, Margret Keuper
Further, our analysis of robust models shows that not only AT but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences.
no code implementations • 24 Jul 2022 • Paula Harder, Duncan Watson-Parris, Philip Stier, Dominik Strassel, Nicolas R. Gauger, Janis Keuper
The original M7 model is used to generate input-output pairs on which a neural network is trained.
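The surrogate idea, fitting a cheap regressor to simulator input-output pairs, can be sketched with a toy emulator (plain ridge regression on quadratic features standing in for the paper's neural network; the `simulator` below is a made-up stand-in for M7, not the real microphysics model):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(X):
    """Made-up stand-in for an expensive simulator such as M7."""
    return np.sin(X[:, :1]) + 0.5 * X[:, 1:2]

X = rng.uniform(-1, 1, (512, 2))   # sampled simulator inputs
Y = simulator(X)                   # matching simulator outputs

# cheapest possible emulator: ridge regression on quadratic features
Phi = np.hstack([X, X ** 2, X[:, :1] * X[:, 1:2], np.ones((len(X), 1))])
w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(Phi.shape[1]), Phi.T @ Y)
pred = Phi @ w
print(np.mean((pred - Y) ** 2) < 0.01)  # True: the surrogate fits well
```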
no code implementations • 21 Jul 2022 • Ricard Durall, Ammar Ghanim, Mario Fernandez, Norman Ettrich, Janis Keuper
Seismic data processing involves techniques to deal with undesired effects that occur during acquisition and pre-processing.
no code implementations • 24 Jun 2022 • Ricard Durall, Ammar Ghanim, Norman Ettrich, Janis Keuper
To the best of our knowledge, this study pioneers the unboxing of neural networks for the demultiple process, helping the user to gain insights into the inner workings of the network.
1 code implementation • 5 Apr 2022 • Paul Gavrikov, Janis Keuper
Deep learning models are intrinsically sensitive to distribution shifts in the input data.
1 code implementation • 1 Apr 2022 • Julia Grabinski, Steffen Jung, Janis Keuper, Margret Keuper
Over the last years, Convolutional Neural Networks (CNNs) have been the dominating neural architecture in a wide range of computer vision tasks.
1 code implementation • CVPR 2022 • Paul Gavrikov, Janis Keuper
In a first use case of the proposed dataset, we can show highly relevant properties of many publicly available pre-trained models for practical applications: I) We analyze distribution shifts (or the lack thereof) between trained filters along different axes of meta-parameters, like visual category of the dataset, task, architecture, or layer depth.
1 code implementation • 20 Jan 2022 • Paul Gavrikov, Janis Keuper
We argue, that the observed properties are a valuable source for further investigation into a better understanding of the impact of shifts in the input data to the generalization abilities of CNN models and novel methods for more robust transfer-learning in this domain.
no code implementations • 28 Dec 2021 • Ricard Durall, Janis Keuper
In this work, we introduce a loop-training scheme for the systematic investigation of observable shifts between the distributions of real training data and GAN generated data.
2 code implementations • AAAI Workshop AdvML 2022 • Peter Lorenz, Dominik Strassel, Margret Keuper, Janis Keuper
In its most commonly reported sub-task, RobustBench evaluates and ranks the adversarial robustness of trained neural networks on CIFAR10 under AutoAttack (Croce and Hein 2020b) with l-inf perturbations limited to eps = 8/255.
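The l-inf constraint with eps = 8/255 amounts to projecting any adversarial candidate back into an eps-ball around the clean image. A minimal NumPy sketch of that projection (our own illustration, not the AutoAttack or RobustBench code):

```python
import numpy as np

def linf_project(x_adv, x, eps=8 / 255):
    """Clip a candidate back into the l-inf eps-ball around x,
    then into the valid image range [0, 1]."""
    return np.clip(np.clip(x_adv, x - eps, x + eps), 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.random((3, 4, 4))                    # a clean "image"
x_adv = x + rng.uniform(-0.1, 0.1, x.shape)  # an over-sized perturbation
proj = linf_project(x_adv, x)
print(np.abs(proj - x).max() <= 8 / 255)     # True: the constraint holds
```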
no code implementations • AAAI Workshop AdvML 2022 • Julia Grabinski, Janis Keuper, Margret Keuper
Many commonly well-performing convolutional neural network models have shown to be susceptible to input data perturbations, indicating a low model robustness.
2 code implementations • ICML Workshop AML 2021 • Peter Lorenz, Paula Harder, Dominik Strassel, Margret Keuper, Janis Keuper
Recently, adversarial attacks on image classification networks by the AutoAttack (Croce and Hein, 2020b) framework have drawn a lot of attention.
1 code implementation • 18 Oct 2021 • Ricard Durall, Jireh Jam, Dominik Strassel, Moi Hoon Yap, Janis Keuper
We then incorporate the geometry information of a segmentation mask to provide a fine-grained manipulation of facial attributes.
no code implementations • 22 Sep 2021 • Paula Harder, Duncan Watson-Parris, Dominik Strassel, Nicolas Gauger, Philip Stier, Janis Keuper
This is done in the ECHAM-HAM global climate aerosol model using the M7 microphysics model, but increased computational costs make it very expensive to run at higher resolutions or for a longer time.
no code implementations • AAAI Workshop AdvML 2022 • Kalun Ho, Franz-Josef Pfreundt, Janis Keuper, Margret Keuper
Over the last decade, the development of deep image classification networks has mostly been driven by the search for the best performance in terms of classification accuracy on standardized benchmarks like ImageNet.
1 code implementation • 25 Mar 2021 • Febin Sebastian Elayanithottathil, Janis Keuper
Most eCommerce applications, like web-shops, have millions of products.
3 code implementations • 4 Mar 2021 • Paula Harder, Franz-Josef Pfreundt, Margret Keuper, Janis Keuper
Despite the success of convolutional neural networks (CNNs) in many computer vision and image analysis tasks, they remain vulnerable against so-called adversarial attacks: Small, crafted perturbations in the input images can lead to false predictions.
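This generic vulnerability can be demonstrated on a toy linear model with a one-step sign-gradient perturbation (an FGSM-style sketch of the attack idea only, not the detection method proposed in the paper; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(16)   # weights of a toy linear "classifier"
x = rng.random(16)            # clean input, pixel values in [0, 1]
score = w @ x                 # the score the attack tries to push down

# for a linear model the gradient of the score w.r.t. x is just w;
# one signed step against it (FGSM-style), clipped to the image range
eps = 0.05
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)
print(w @ x_adv <= score)  # True: a tiny perturbation lowers the score
```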
no code implementations • 17 Dec 2020 • Ricard Durall, Avraam Chatzimichailidis, Peter Labus, Janis Keuper
This undesirable event occurs when the model can only fit a few modes of the data distribution, while ignoring the majority of them.
no code implementations • 16 Dec 2020 • Ricard Durall, Kalun Ho, Franz-Josef Pfreundt, Janis Keuper
In particular, our approach exploits the structure of a latent space (learned by the representation learning) and employs it to condition the generative model.
no code implementations • 1 Dec 2020 • Dominik Strassel, Philipp Reusch, Janis Keuper
The recent successes and widespread application of compute-intensive machine learning and data analytics methods have been boosting the usage of the Python programming language on HPC systems.
no code implementations • 6 Jul 2020 • Kalun Ho, Janis Keuper, Franz-Josef Pfreundt, Margret Keuper
In this work, we evaluate two different image clustering objectives, k-means clustering and correlation clustering, in the context of Triplet Loss induced feature space embeddings.
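The k-means side of this comparison can be sketched with plain Lloyd iterations on toy 2-D "embeddings" (an illustrative stand-in for a Triplet-Loss feature space; the data, seeding, and function names are our own):

```python
import numpy as np

def kmeans(X, init_centers, iters=50):
    """Plain Lloyd's algorithm on embedding vectors."""
    centers = init_centers.copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)           # assign to nearest center
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)  # recompute centers
    return labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)),   # one tight cluster
               rng.normal(3.0, 0.1, (50, 2))])  # a second, well-separated one
labels = kmeans(X, X[[0, 50]])                  # seed one center per cluster
print(labels[:50].min(), labels[:50].max())     # 0 0: first cluster is pure
print(labels[50:].min(), labels[50:].max())     # 1 1: second cluster is pure
```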
2 code implementations • CVPR 2020 • Ricard Durall, Margret Keuper, Janis Keuper
Generative convolutional deep neural networks, e.g. popular GAN architectures, rely on convolution-based up-sampling methods to produce non-scalar outputs like images or video sequences.
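Such up-sampling methods are known to leave traces in the frequency domain. A common analysis tool is the azimuthally averaged power spectrum of an image, sketched here in NumPy (an illustration of the general technique, not the paper's exact pipeline):

```python
import numpy as np

def azimuthal_average(power):
    """Average a centered 2-D power spectrum over rings of equal radius."""
    h, w = power.shape
    y, x = np.indices((h, w))
    r = np.sqrt((y - h // 2) ** 2 + (x - w // 2) ** 2).astype(int)
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

def spectral_profile(img):
    """1-D radial profile of the image's power spectrum."""
    f = np.fft.fftshift(np.fft.fft2(img))
    return azimuthal_average(np.abs(f) ** 2)

img = np.random.default_rng(0).random((64, 64))
profile = spectral_profile(img)
print(profile.shape)  # one value per integer frequency radius
```

Comparing such profiles of real versus generated images is one way to expose up-sampling artifacts.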
1 code implementation • 26 Feb 2020 • Peter Michael Habelitz, Janis Keuper
We introduce an open source Python framework named PHS (Parallel Hyperparameter Search) to enable hyperparameter optimization of arbitrary Python functions on numerous compute instances.
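The core pattern, evaluating an arbitrary Python objective over many parameter sets in parallel, can be sketched in a few lines (a thread-based toy, not the PHS API; all names and the objective are our own):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def objective(params):
    """Stand-in for any black-box Python function of its hyperparameters."""
    return (params["x"] - 0.3) ** 2 + (params["y"] + 0.1) ** 2

def random_search(fn, n_trials=32, workers=4, seed=0):
    """Draw random configurations and evaluate them in parallel."""
    rng = random.Random(seed)
    trials = [{"x": rng.uniform(-1, 1), "y": rng.uniform(-1, 1)}
              for _ in range(n_trials)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        losses = list(pool.map(fn, trials))
    best = min(range(n_trials), key=losses.__getitem__)
    return trials[best], losses[best]

params, loss = random_search(objective)
print(loss == objective(params))  # True: the reported loss matches its config
```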
no code implementations • 7 Feb 2020 • Ricard Durall, Franz-Josef Pfreundt, Janis Keuper
The term attribute transfer refers to the tasks of altering images in such a way, that the semantic interpretation of a given input image is shifted towards an intended direction, which is quantified by semantic attributes.
no code implementations • 4 Feb 2020 • Kalun Ho, Janis Keuper, Margret Keuper
Our method is based on straightforward spatio-temporal cues that can be extracted from neighboring frames in an image sequence without supervision.
1 code implementation • https://ieeexplore.ieee.org/document/8950672 2020 • Raju Ram, Sabine Müller, Franz-Josef Pfreundt, Nicolas R. Gauger, Janis Keuper
Reducing its computational complexity from cubic to quadratic allows an efficient strong scaling of Bayesian Optimization while outperforming the previous approach regarding optimization accuracy.
6 code implementations • 2 Nov 2019 • Ricard Durall, Margret Keuper, Franz-Josef Pfreundt, Janis Keuper
In this work, we present a simple way to detect such fake face images - so-called DeepFakes.
no code implementations • 8 Oct 2019 • Ricard Durall, Franz-Josef Pfreundt, Janis Keuper
Recent studies have shown remarkable success in image-to-image translation for attribute transfer applications.
1 code implementation • 26 Sep 2019 • Avraam Chatzimichailidis, Franz-Josef Pfreundt, Nicolas R. Gauger, Janis Keuper
Current training methods for deep neural networks boil down to very high dimensional and non-convex optimization problems which are usually solved by a wide range of stochastic gradient descent methods.
no code implementations • 23 Sep 2019 • Ricard Durall, Franz-Josef Pfreundt, Ullrich Köthe, Janis Keuper
Recent deep learning based approaches have shown remarkable success on object segmentation tasks.
1 code implementation • 29 May 2019 • Ricard Durall, Franz-Josef Pfreundt, Janis Keuper
The basic idea of our approach is to split convolutional filters into additive high and low frequency parts, while shifting weight updates from low to high during the training.
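The additive split can be illustrated with the simplest possible decomposition, separating a kernel's DC (mean) component from its zero-mean residual (a hedged toy, not the paper's exact frequency split):

```python
import numpy as np

def split_filter(k):
    """Additively split a kernel into a low-frequency (DC/mean) part and
    a high-frequency (zero-mean) residual, so that k = low + high."""
    low = np.full_like(k, k.mean())
    high = k - low
    return low, high

k = np.array([[1., 2., 1.],
              [2., 4., 2.],
              [1., 2., 1.]]) / 16.0   # a classic low-pass smoothing kernel
low, high = split_filter(k)
print(np.allclose(low + high, k))     # True: the split is exactly additive
```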
1 code implementation • 27 Aug 2018 • Dominik Marek Loroch, Franz-Josef Pfreundt, Norbert Wehn, Janis Keuper
Various approaches have been investigated to reduce the necessary resources, one of which is to leverage the sparsity occurring in deep neural networks due to the high levels of redundancy in the network parameters.
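A common way to exploit such redundancy is magnitude pruning: zero out the smallest-magnitude weights and keep only the rest. A NumPy sketch of this generic technique (our own illustration, not the specific approach of the paper):

```python
import numpy as np

def magnitude_prune(w, sparsity=0.9):
    """Zero out the smallest-magnitude fraction of the weights."""
    flat = np.abs(w).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return w.copy()
    thresh = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    return np.where(np.abs(w) <= thresh, 0.0, w)

w = np.random.default_rng(2).standard_normal((64, 64))
pruned = magnitude_prune(w, 0.9)
print(round((pruned == 0).mean(), 2))  # ~0.9 of the entries are now zero
```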
2 code implementations • 13 Oct 2017 • Dominik Marek Loroch, Norbert Wehn, Franz-Josef Pfreundt, Janis Keuper
While most related publications validate the proposed approach on a single DNN topology, it appears evident that the optimal choice of the quantization method and the number of coding bits is topology-dependent.
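The bit-width trade-off can be illustrated with simple uniform quantization (a generic sketch, not the quantization schemes compared in the paper):

```python
import numpy as np

def uniform_quantize(x, bits):
    """Uniform (linear) quantization of x to 2**bits levels over its range."""
    lo, hi = x.min(), x.max()
    levels = 2 ** bits - 1
    q = np.round((x - lo) / (hi - lo) * levels)   # integer codes
    return q / levels * (hi - lo) + lo            # dequantized values

x = np.random.default_rng(0).standard_normal(1000)
errs = [np.mean((uniform_quantize(x, b) - x) ** 2) for b in (2, 4, 8)]
print(errs[0] > errs[1] > errs[2])  # True: error falls as bit width grows
```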
no code implementations • 2 Jun 2017 • Valentin Tschannen, Matthias Delescluse, Mathieu Rodriguez, Janis Keuper
The idea to use automated algorithms to determine geological facies from well logs is not new (see e.g. Busch et al. (1987); Rabaute (1998)), but the recent and dramatic increase in research in the field of machine learning makes it a good time to revisit the topic.
no code implementations • 31 May 2017 • Martin Kuehn, Janis Keuper, Franz-Josef Pfreundt
I/O is another bottleneck when working with DNNs in a standard parallel HPC setting, which we will consider in more detail in a forthcoming paper.
no code implementations • 22 Sep 2016 • Janis Keuper, Franz-Josef Pfreundt
This paper presents a theoretical analysis and practical evaluation of the main bottlenecks towards a scalable distributed solution for the training of Deep Neural Networks (DNNs).
no code implementations • 19 May 2015 • Janis Keuper, Franz-Josef Pfreundt
In this context, Stochastic Gradient Descent (SGD) methods have long proven to provide good results, both in terms of convergence and accuracy.