Search Results for author: Franz-Josef Pfreundt

Found 18 papers, 7 papers with code

Fake or JPEG? Revealing Common Biases in Generated Image Detection Datasets

no code implementations26 Mar 2024 Patrick Grommelt, Louis Weiss, Franz-Josef Pfreundt, Janis Keuper

In this paper, we emphasize that many datasets for AI-generated image detection contain biases related to JPEG compression and image size.

Misinformation
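
A quick way to probe for such biases is to compare basic image statistics (resolution, fraction of JPEG-compressed files) between the real and generated halves of a detection dataset. The following is a minimal sketch of such an audit with hypothetical folder names, not the paper's evaluation code:

    # Sketch only: compare size and JPEG statistics between the "real" and
    # "generated" halves of a hypothetical detection dataset.
    from pathlib import Path
    from PIL import Image
    import numpy as np

    def image_stats(folder):
        sizes, jpeg_flags = [], []
        for path in Path(folder).glob("*"):
            try:
                with Image.open(path) as img:
                    sizes.append(img.size)                 # (width, height)
                    jpeg_flags.append(img.format == "JPEG")
            except OSError:
                continue  # skip non-image files
        sizes = np.array(sizes)
        return {
            "n": len(sizes),
            "jpeg_fraction": float(np.mean(jpeg_flags)) if jpeg_flags else 0.0,
            "mean_width": float(sizes[:, 0].mean()) if len(sizes) else 0.0,
            "mean_height": float(sizes[:, 1].mean()) if len(sizes) else 0.0,
        }

    # "data/real" and "data/generated" are placeholder paths.
    for name in ("data/real", "data/generated"):
        print(name, image_stats(name))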

Estimating the Robustness of Classification Models by the Structure of the Learned Feature-Space

no code implementations AAAI Workshop AdvML 2022 Kalun Ho, Franz-Josef Pfreundt, Janis Keuper, Margret Keuper

Over the last decade, the development of deep image classification networks has mostly been driven by the search for the best performance in terms of classification accuracy on standardized benchmarks like ImageNet.

Clustering Image Classification

Combining Transformer Generators with Convolutional Discriminators

no code implementations21 May 2021 Ricard Durall, Stanislav Frolov, Jörn Hees, Federico Raue, Franz-Josef Pfreundt, Andreas Dengel, Janis Keuper

Transformer models have recently attracted much interest from computer vision researchers and have since been successfully employed for several problems traditionally addressed with convolutional neural networks.

Data Augmentation Image Generation +1

SpectralDefense: Detecting Adversarial Attacks on CNNs in the Fourier Domain

3 code implementations4 Mar 2021 Paula Harder, Franz-Josef Pfreundt, Margret Keuper, Janis Keuper

Despite the success of convolutional neural networks (CNNs) in many computer vision and image analysis tasks, they remain vulnerable to so-called adversarial attacks: small, crafted perturbations in the input images can lead to false predictions.

Adversarial Attack
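
The title points to detection in the Fourier domain. As a rough, hedged illustration of that general idea (not the paper's exact feature set), one can take the log-magnitude spectrum of each image and train a simple classifier to separate clean from attacked inputs:

    # Sketch only: frequency-magnitude features for adversarial-example detection.
    # clean_images / attacked_images stand in for real (N, H, W) grayscale data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def fourier_features(images):
        # 2D FFT magnitude spectrum, log-scaled and flattened per image.
        spectra = np.abs(np.fft.fftshift(np.fft.fft2(images), axes=(-2, -1)))
        return np.log1p(spectra).reshape(len(images), -1)

    rng = np.random.default_rng(0)
    clean_images = rng.random((64, 32, 32))          # placeholder data
    attacked_images = clean_images + 0.05 * rng.standard_normal((64, 32, 32))

    X = np.concatenate([fourier_features(clean_images),
                        fourier_features(attacked_images)])
    y = np.concatenate([np.zeros(64), np.ones(64)])  # 0 = clean, 1 = attacked
    detector = LogisticRegression(max_iter=1000).fit(X, y)
    print("train accuracy:", detector.score(X, y))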

Latent Space Conditioning on Generative Adversarial Networks

no code implementations16 Dec 2020 Ricard Durall, Kalun Ho, Franz-Josef Pfreundt, Janis Keuper

In particular, our approach exploits the structure of a latent space (learned via representation learning) and employs it to condition the generative model.

Image Generation Representation Learning
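
One plausible reading of this is that a label derived from the structure of a learned latent space, for example a cluster assignment of encoder embeddings, is fed to an otherwise standard conditional generator. The sketch below only illustrates that wiring under this assumption; it is not the architecture from the paper:

    # Sketch: condition a generator on cluster ids obtained from a learned embedding.
    import torch
    import torch.nn as nn

    class ConditionedGenerator(nn.Module):
        def __init__(self, noise_dim=64, n_clusters=10, out_dim=784):
            super().__init__()
            self.embed = nn.Embedding(n_clusters, 16)   # cluster id -> vector
            self.net = nn.Sequential(
                nn.Linear(noise_dim + 16, 256), nn.ReLU(),
                nn.Linear(256, out_dim), nn.Tanh(),
            )

        def forward(self, z, cluster_id):
            cond = self.embed(cluster_id)
            return self.net(torch.cat([z, cond], dim=1))

    gen = ConditionedGenerator()
    z = torch.randn(8, 64)
    cluster_id = torch.randint(0, 10, (8,))  # e.g. k-means labels of encoder features
    fake = gen(z, cluster_id)
    print(fake.shape)  # torch.Size([8, 784])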

Learning Embeddings for Image Clustering: An Empirical Study of Triplet Loss Approaches

no code implementations6 Jul 2020 Kalun Ho, Janis Keuper, Franz-Josef Pfreundt, Margret Keuper

In this work, we evaluate two different image clustering objectives, k-means clustering and correlation clustering, in the context of Triplet Loss induced feature space embeddings.

Clustering Image Classification +1
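
As a toy illustration of that pipeline (embeddings trained with a triplet loss, then clustered), the sketch below combines PyTorch's built-in TripletMarginLoss with scikit-learn's k-means; it is not the experimental protocol of the paper:

    # Toy sketch: learn an embedding with a triplet loss, then run k-means on it.
    import torch
    import torch.nn as nn
    from sklearn.cluster import KMeans

    embed = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
    triplet = nn.TripletMarginLoss(margin=1.0)
    optim = torch.optim.Adam(embed.parameters(), lr=1e-3)

    for step in range(100):
        anchor = torch.randn(128, 32)                 # placeholder triplets
        positive = anchor + 0.1 * torch.randn(128, 32)
        negative = torch.randn(128, 32)
        loss = triplet(embed(anchor), embed(positive), embed(negative))
        optim.zero_grad()
        loss.backward()
        optim.step()

    with torch.no_grad():
        features = embed(torch.randn(500, 32)).numpy()
    labels = KMeans(n_clusters=5, n_init=10).fit_predict(features)
    print("cluster sizes:", [int((labels == k).sum()) for k in range(5)])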

Local Facial Attribute Transfer through Inpainting

no code implementations7 Feb 2020 Ricard Durall, Franz-Josef Pfreundt, Janis Keuper

The term attribute transfer refers to the task of altering images in such a way that the semantic interpretation of a given input image is shifted towards an intended direction, which is quantified by semantic attributes.

Attribute Generative Adversarial Network

Scalable Hyperparameter Optimization with Lazy Gaussian Processes

1 code implementation https://ieeexplore.ieee.org/document/8950672 2020 Raju Ram, Sabine Müller, Franz-Josef Pfreundt, Nicolas R. Gauger, Janis Keuper

Reducing the computational complexity of the underlying Gaussian process from cubic to quadratic allows efficient strong scaling of Bayesian Optimization while outperforming the previous approach in optimization accuracy.

Bayesian Optimization Gaussian Processes +1
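
The cubic cost in question is the one incurred by exact Gaussian-process inference, where fitting requires factorizing an n-by-n kernel matrix. The sketch below shows that standard computation so the bottleneck is visible; it is not the lazy-GP scheme proposed in the paper:

    # Sketch: the O(n^3) step of exact GP regression is the Cholesky factorization
    # of the n x n kernel matrix (standard formulation, not the paper's variant).
    import numpy as np

    def rbf_kernel(a, b, lengthscale=1.0):
        d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
        return np.exp(-0.5 * d2 / lengthscale**2)

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

    K = rbf_kernel(X, X) + 1e-6 * np.eye(len(X))
    L = np.linalg.cholesky(K)                       # O(n^3) bottleneck
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

    X_test = np.linspace(-3, 3, 5)[:, None]
    mean = rbf_kernel(X_test, X) @ alpha            # GP posterior mean
    print(mean)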

Unmasking DeepFakes with simple Features

5 code implementations2 Nov 2019 Ricard Durall, Margret Keuper, Franz-Josef Pfreundt, Janis Keuper

In this work, we present a simple way to detect such fake face images - so-called DeepFakes.

DeepFake Detection
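
One family of simple, classical features for this task is frequency-domain statistics, e.g. a radially averaged FFT power spectrum fed to a small classifier. Whether this matches the paper's exact feature extraction is an assumption here; the sketch below only illustrates the radial averaging:

    # Sketch (assumed features, not necessarily the paper's): radially averaged
    # FFT power spectrum of a face image as a 1-D descriptor.
    import numpy as np

    def radial_power_spectrum(image, n_bins=64):
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
        h, w = spectrum.shape
        yy, xx = np.indices((h, w))
        r = np.hypot(yy - h / 2, xx - w / 2)
        bins = np.linspace(0, r.max(), n_bins + 1)
        profile = []
        for lo, hi in zip(bins[:-1], bins[1:]):
            vals = spectrum[(r >= lo) & (r < hi)]
            profile.append(vals.mean() if vals.size else 0.0)
        return np.log1p(np.array(profile))

    rng = np.random.default_rng(0)
    image = rng.random((128, 128))           # placeholder for a grayscale face crop
    print(radial_power_spectrum(image)[:8])  # descriptor fed to a simple classifier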

Semi Few-Shot Attribute Translation

no code implementations8 Oct 2019 Ricard Durall, Franz-Josef Pfreundt, Janis Keuper

Recent studies have shown remarkable success in image-to-image translation for attribute transfer applications.

Attribute Few-Shot Learning +3

GradVis: Visualization and Second Order Analysis of Optimization Surfaces during the Training of Deep Neural Networks

1 code implementation26 Sep 2019 Avraam Chatzimichailidis, Franz-Josef Pfreundt, Nicolas R. Gauger, Janis Keuper

Current training methods for deep neural networks boil down to very high-dimensional, non-convex optimization problems, which are usually solved by a wide range of stochastic gradient descent methods.
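
A common way to visualize such an optimization surface is to evaluate the loss along a direction in parameter space around the current weights. The sketch below does this for a tiny model and a random direction; it illustrates the idea only and is not the GradVis API:

    # Sketch: evaluate the training loss along one random direction in weight space.
    import torch
    import torch.nn as nn

    model = nn.Linear(10, 1)
    loss_fn = nn.MSELoss()
    X, y = torch.randn(256, 10), torch.randn(256, 1)

    # Random direction with the same shapes as the parameters.
    direction = [torch.randn_like(p) for p in model.parameters()]
    base = [p.detach().clone() for p in model.parameters()]

    profile = []
    for alpha in torch.linspace(-1.0, 1.0, 21):
        with torch.no_grad():
            for p, b, d in zip(model.parameters(), base, direction):
                p.copy_(b + alpha * d)
            profile.append(loss_fn(model(X), y).item())
    print(profile)  # 1-D slice of the loss surface around the current weights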

Object Segmentation using Pixel-wise Adversarial Loss

no code implementations23 Sep 2019 Ricard Durall, Franz-Josef Pfreundt, Ullrich Köthe, Janis Keuper

Recent deep learning based approaches have shown remarkable success on object segmentation tasks.

Object Segmentation +1

Stabilizing GANs with Soft Octave Convolutions

1 code implementation29 May 2019 Ricard Durall, Franz-Josef Pfreundt, Janis Keuper

The basic idea of our approach is to split convolutional filters into additive high and low frequency parts, while shifting weight updates from low to high during the training.
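
Assuming an octave-convolution-style decomposition (which may differ in detail from the paper's layer), a minimal picture of that split is a convolution whose output is the sum of a full-resolution branch and a low-frequency branch computed at half resolution, blended by a factor that can be scheduled from low towards high frequencies over training:

    # Sketch: additive low/high-frequency split of a convolution, with a blend
    # factor alpha that can be scheduled from "mostly low" to "mostly high".
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SoftFrequencySplitConv(nn.Module):
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.high = nn.Conv2d(in_ch, out_ch, 3, padding=1)  # full resolution
            self.low = nn.Conv2d(in_ch, out_ch, 3, padding=1)   # half resolution

        def forward(self, x, alpha):
            high = self.high(x)
            low = F.interpolate(self.low(F.avg_pool2d(x, 2)),
                                size=x.shape[-2:], mode="nearest")
            # alpha = 0 -> only low frequencies, alpha = 1 -> only high frequencies
            return alpha * high + (1.0 - alpha) * low

    layer = SoftFrequencySplitConv(3, 8)
    x = torch.randn(4, 3, 32, 32)
    for step, alpha in enumerate([0.1, 0.5, 0.9]):  # shift low -> high over training
        print(step, layer(x, alpha).shape)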

Sparsity in Deep Neural Networks - An Empirical Investigation with TensorQuant

1 code implementation27 Aug 2018 Dominik Marek Loroch, Franz-Josef Pfreundt, Norbert Wehn, Janis Keuper

Various approaches have been investigated to reduce the necessary resources, one of which is to leverage the sparsity occurring in deep neural networks due to the high levels of redundancy in the network parameters.

Autonomous Driving
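
Measuring that sparsity per layer is straightforward: count the fraction of weights whose magnitude falls below a threshold. The sketch below is a generic illustration for a PyTorch model, not the TensorQuant tooling:

    # Sketch: per-layer weight sparsity (fraction of near-zero parameters).
    import torch
    import torch.nn as nn

    def layer_sparsity(model, threshold=1e-3):
        report = {}
        for name, param in model.named_parameters():
            if "weight" in name:
                near_zero = (param.detach().abs() < threshold).float().mean().item()
                report[name] = round(near_zero, 4)
        return report

    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
    print(layer_sparsity(model))  # freshly initialized nets show little sparsity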

TensorQuant - A Simulation Toolbox for Deep Neural Network Quantization

2 code implementations13 Oct 2017 Dominik Marek Loroch, Norbert Wehn, Franz-Josef Pfreundt, Janis Keuper

While most related publications validate the proposed approach on a single DNN topology, it appears evident that the optimal choice of the quantization method and the number of coding bits is topology dependent.

Quantization
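
Simulating fixed-point quantization at different bit widths is the kind of experiment such a toolbox enables. The generic sketch below (not the TensorQuant API) rounds weights to a signed fixed-point grid and reports the resulting error, which could then be compared across topologies:

    # Sketch: simulate fixed-point weight quantization at several bit widths
    # (generic illustration, not the TensorQuant toolbox API).
    import numpy as np

    def quantize_fixed_point(w, total_bits=8, frac_bits=6):
        scale = 2.0 ** frac_bits
        max_val = (2.0 ** (total_bits - 1) - 1) / scale     # signed range
        return np.clip(np.round(w * scale) / scale, -max_val, max_val)

    rng = np.random.default_rng(0)
    weights = rng.standard_normal(10000) * 0.1              # placeholder layer weights

    for bits in (16, 8, 4):
        q = quantize_fixed_point(weights, total_bits=bits, frac_bits=bits - 2)
        mse = float(np.mean((weights - q) ** 2))
        print(f"{bits}-bit fixed point: quantization MSE = {mse:.2e}")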

Using GPI-2 for Distributed Memory Parallelization of the Caffe Toolbox to Speed up Deep Neural Network Training

no code implementations31 May 2017 Martin Kuehn, Janis Keuper, Franz-Josef Pfreundt

I/O is another bottleneck when working with DNNs in a standard parallel HPC setting, which we will consider in more detail in a forthcoming paper.

Blocking

Distributed Training of Deep Neural Networks: Theoretical and Practical Limits of Parallel Scalability

no code implementations22 Sep 2016 Janis Keuper, Franz-Josef Pfreundt

This paper presents a theoretical analysis and practical evaluation of the main bottlenecks towards a scalable distributed solution for the training of Deep Neural Networks (DNNs).

Asynchronous Parallel Stochastic Gradient Descent - A Numeric Core for Scalable Distributed Machine Learning Algorithms

no code implementations19 May 2015 Janis Keuper, Franz-Josef Pfreundt

In this context, Stochastic Gradient Descent (SGD) methods have long proven to provide good results, both in terms of convergence and accuracy.

Distributed, Parallel, and Cluster Computing
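
The numeric core referred to is the plain SGD update applied concurrently by several workers without synchronizing on every step. The toy sketch below shows a Hogwild-style version with Python threads on a shared parameter vector; it illustrates the asynchronous update pattern only, not the paper's GPI-2 based HPC implementation:

    # Toy sketch: several threads apply SGD updates to shared parameters without
    # per-step synchronization (asynchronous SGD illustration).
    import threading
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((4096, 10))
    true_w = rng.standard_normal(10)
    y = X @ true_w + 0.01 * rng.standard_normal(4096)

    w = np.zeros(10)          # shared parameters, updated without locks

    def worker(seed, steps=2000, lr=0.01, batch=32):
        local_rng = np.random.default_rng(seed)
        for _ in range(steps):
            idx = local_rng.integers(0, len(X), batch)
            grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / batch
            w[:] -= lr * grad    # asynchronous, possibly stale update

    threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("parameter error:", float(np.linalg.norm(w - true_w)))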
