no code implementations • 26 Mar 2024 • Patrick Grommelt, Louis Weiss, Franz-Josef Pfreundt, Janis Keuper
In this paper, we emphasize that many datasets for AI-generated image detection contain biases related to JPEG compression and image size.
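As a hedged illustration of how such a bias can be probed (not the paper's protocol; the two-folder layout, file paths, and use of PIL are assumptions), one can simply compare the size statistics of the "real" and "generated" subsets:

```python
# Sketch: compare image-size statistics of "real" vs. "generated" subsets.
# Folder layout (real/ and fake/) and the use of PIL are assumptions.
from pathlib import Path
from PIL import Image
import statistics

def size_stats(folder):
    widths, heights = [], []
    for p in Path(folder).glob("*"):
        try:
            with Image.open(p) as im:
                widths.append(im.width)
                heights.append(im.height)
        except OSError:
            continue  # skip non-image files
    return statistics.median(widths), statistics.median(heights)

for subset in ("real", "fake"):
    w, h = size_stats(subset)
    print(f"{subset}: median size {w}x{h}")
# A large gap between the two medians indicates that image size alone
# is a confounder a detector could latch onto.
```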
no code implementations • AAAI Workshop AdvML 2022 • Kalun Ho, Franz-Josef Pfreundt, Janis Keuper, Margret Keuper
Over the last decade, the development of deep image classification networks has mostly been driven by the search for the best performance in terms of classification accuracy on standardized benchmarks like ImageNet.
no code implementations • 21 May 2021 • Ricard Durall, Stanislav Frolov, Jörn Hees, Federico Raue, Franz-Josef Pfreundt, Andreas Dengel, Janis Keuper
Transformer models have recently attracted much interest from computer vision researchers and have since been successfully employed for several problems traditionally addressed with convolutional neural networks.
3 code implementations • 4 Mar 2021 • Paula Harder, Franz-Josef Pfreundt, Margret Keuper, Janis Keuper
Despite the success of convolutional neural networks (CNNs) in many computer vision and image analysis tasks, they remain vulnerable to so-called adversarial attacks: small, crafted perturbations in the input images can lead to false predictions.
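For background on what such a perturbation looks like, here is a minimal FGSM-style sketch (a generic attack for illustration, not the defense studied in the paper; `model`, `x`, and `y` are placeholders):

```python
# Minimal FGSM-style adversarial perturbation (generic illustration).
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Return x perturbed by the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # A small, barely visible step in the gradient-sign direction
    # is often enough to flip the prediction.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```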
no code implementations • 16 Dec 2020 • Ricard Durall, Kalun Ho, Franz-Josef Pfreundt, Janis Keuper
In particular, our approach exploits the structure of a latent space (learned by representation learning) and employs it to condition the generative model.
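A minimal sketch of what conditioning a generator on such a learned latent code can look like (the layer sizes and the simple concatenation scheme are assumptions for illustration, not the paper's architecture):

```python
# Sketch: a generator conditioned on a representation-learning code.
# Layer sizes and the concatenation scheme are illustrative assumptions.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=64, code_dim=128, out_dim=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + code_dim, 512),
            nn.ReLU(),
            nn.Linear(512, out_dim),
            nn.Tanh(),
        )

    def forward(self, noise, code):
        # The latent code (e.g. from a pretrained encoder) steers generation.
        return self.net(torch.cat([noise, code], dim=1))
```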
no code implementations • 6 Jul 2020 • Kalun Ho, Janis Keuper, Franz-Josef Pfreundt, Margret Keuper
In this work, we evaluate two different image clustering objectives, k-means clustering and correlation clustering, in the context of Triplet Loss induced feature space embeddings.
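A hedged sketch of this evaluation setup for the k-means case (the embedding network is a placeholder; correlation clustering is omitted here):

```python
# Sketch: cluster embeddings from a triplet-loss-trained network with k-means,
# one of the two objectives mentioned above. The model is a placeholder.
import torch
from sklearn.cluster import KMeans

def embed_and_cluster(model, images, n_clusters):
    with torch.no_grad():
        feats = model(images).cpu().numpy()  # triplet-loss feature embeddings
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
```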
no code implementations • 7 Feb 2020 • Ricard Durall, Franz-Josef Pfreundt, Janis Keuper
The term attribute transfer refers to the task of altering images in such a way that the semantic interpretation of a given input image is shifted towards an intended direction, which is quantified by semantic attributes.
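Concretely, "shifting towards an intended direction" is often realised as a step along an attribute vector in some latent space; a generic, hedged illustration (the direction vector and step size here are placeholders, not learned quantities):

```python
# Generic illustration of an attribute shift in a latent space.
import numpy as np

z = np.random.randn(128)          # latent code of the input image
d_attr = np.random.randn(128)     # placeholder for a learned attribute direction
d_attr /= np.linalg.norm(d_attr)

alpha = 1.5                       # step size controls how strongly the
z_edit = z + alpha * d_attr       # attribute is expressed in the output
```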
1 code implementation • https://ieeexplore.ieee.org/document/8950672 • 2020 • Raju Ram, Sabine Müller, Franz-Josef Pfreundt, Nicolas R. Gauger, Janis Keuper
Reducing its computational complexity from cubic to quadratic allows efficient strong scaling of Bayesian Optimization while outperforming the previous approach in terms of optimization accuracy.
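For context, the cubic cost referred to here is the standard cost of exact Gaussian-process inference inside Bayesian Optimization (a background note, not the paper's derivation):

```latex
% Exact GP posterior mean at a candidate point x_*:
\mu(x_*) = k_*^{\top} \left(K + \sigma^{2} I\right)^{-1} y,
\qquad \text{cost } \mathcal{O}(n^{3}) \text{ to factorize the } n \times n \text{ matrix } K + \sigma^{2} I,
% which is the cubic term that the proposed approach reduces to quadratic.
```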
6 code implementations • 2 Nov 2019 • Ricard Durall, Margret Keuper, Franz-Josef Pfreundt, Janis Keuper
In this work, we present a simple way to detect such fake face images - so-called DeepFakes.
no code implementations • 8 Oct 2019 • Ricard Durall, Franz-Josef Pfreundt, Janis Keuper
Recent studies have shown remarkable success in image-to-image translation for attribute transfer applications.
1 code implementation • 26 Sep 2019 • Avraam Chatzimichailidis, Franz-Josef Pfreundt, Nicolas R. Gauger, Janis Keuper
Current training methods for deep neural networks boil down to very high-dimensional, non-convex optimization problems, which are usually solved by a wide range of stochastic gradient descent methods.
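A minimal sketch of the stochastic-gradient-descent loop that such optimization problems are typically solved with (model, data loader, and hyperparameters are placeholders):

```python
# Minimal SGD training loop (generic illustration of the optimization setup).
import torch

def train(model, loader, lr=0.01, epochs=1):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()   # gradient of a non-convex, high-dimensional objective
            opt.step()        # noisy descent step on a mini-batch
```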
no code implementations • 23 Sep 2019 • Ricard Durall, Franz-Josef Pfreundt, Ullrich Köthe, Janis Keuper
Recent deep learning based approaches have shown remarkable success on object segmentation tasks.
1 code implementation • 29 May 2019 • Ricard Durall, Franz-Josef Pfreundt, Janis Keuper
The basic idea of our approach is to split convolutional filters into additive high- and low-frequency parts, while shifting weight updates from low to high frequencies during training.
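One way such an additive low/high-frequency split of a filter can be realised is by low-pass filtering the kernel and taking the residual as the high-frequency part (a hedged sketch; the box-blur low-pass and the kernel shapes are assumptions, not necessarily the paper's construction):

```python
# Sketch: split a conv kernel into additive low- and high-frequency parts.
# The 3x3 box-blur low-pass is an illustrative assumption.
import torch
import torch.nn.functional as F

def split_filter(w):
    """w: conv weights of shape (out_ch, in_ch, k, k)."""
    k = w.shape[-1]
    box = torch.full((1, 1, 3, 3), 1.0 / 9.0, dtype=w.dtype)
    flat = w.reshape(-1, 1, k, k)
    low = F.conv2d(flat, box, padding=1).reshape_as(w)  # smoothed kernel
    high = w - low                                       # residual detail
    return low, high                                     # low + high == w
```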
1 code implementation • 27 Aug 2018 • Dominik Marek Loroch, Franz-Josef Pfreundt, Norbert Wehn, Janis Keuper
Various approaches have been investigated to reduce the necessary resources, one of which is to leverage the sparsity occurring in deep neural networks due to the high levels of redundancy in the network parameters.
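A common way to exploit this redundancy is magnitude pruning, i.e. zeroing the smallest weights (a generic sketch, not necessarily the mechanism studied in the paper):

```python
# Generic magnitude-pruning sketch: zero the smallest-magnitude weights.
import torch

def prune_by_magnitude(w, sparsity=0.9):
    threshold = torch.quantile(w.abs().flatten(), sparsity)
    mask = (w.abs() >= threshold).to(w.dtype)
    return w * mask  # ~90% of entries are exactly zero at sparsity=0.9
```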
2 code implementations • 13 Oct 2017 • Dominik Marek Loroch, Norbert Wehn, Franz-Josef Pfreundt, Janis Keuper
While most related publications validate the proposed approach on a single DNN topology, it appears evident that the optimal choice of the quantization method and the number of coding bits is topology dependent.
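A minimal sketch of the kind of fixed-point quantization whose bit width such a study varies (uniform, symmetric quantization; the exact schemes compared in the paper may differ):

```python
# Uniform symmetric quantization of a tensor to `bits` coding bits (sketch).
import torch

def quantize(w, bits=8):
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    q = torch.clamp(torch.round(w / scale), -qmax, qmax)
    return q * scale  # dequantized values; fewer bits -> coarser grid
```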
no code implementations • 31 May 2017 • Martin Kuehn, Janis Keuper, Franz-Josef Pfreundt
I/O is another bottleneck when working with DNNs in a standard parallel HPC setting, which we will consider in more detail in a forthcoming paper.
no code implementations • 22 Sep 2016 • Janis Keuper, Franz-Josef Pfreundt
This paper presents a theoretical analysis and practical evaluation of the main bottlenecks towards a scalable distributed solution for the training of Deep Neural Networks (DNNs).
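As general background (not the paper's specific analysis), one of the central bottlenecks in data-parallel DNN training is the per-step synchronization of gradients across workers; a hedged sketch of that communication step using torch.distributed, with the process-group setup omitted:

```python
# Sketch of the gradient synchronization step in data-parallel SGD.
# Assumes torch.distributed has already been initialized elsewhere.
import torch.distributed as dist

def average_gradients(model):
    world_size = dist.get_world_size()
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)  # communication cost
            p.grad /= world_size                           # grows with model size
```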
no code implementations • 19 May 2015 • Janis Keuper, Franz-Josef Pfreundt
In this context, Stochastic Gradient Descent (SGD) methods have long proven to provide good results, in terms of both convergence and accuracy.
Distributed, Parallel, and Cluster Computing