Search Results for author: Philip Sperl

Found 12 papers, 3 papers with code

Imbalance in Regression Datasets

no code implementations · 19 Feb 2024 · Daniel Kowatsch, Nicolas M. Müller, Kilian Tscharke, Philip Sperl, Konstantin Böttinger

For classification, the problem of class imbalance is well known and has been extensively studied.

regression

Physical Adversarial Examples for Multi-Camera Systems

no code implementations · 14 Nov 2023 · Ana Răduţoiu, Jan-Philipp Schulze, Philip Sperl, Konstantin Böttinger

Neural networks form the foundation of many intelligent systems, yet they are known to be easily fooled by adversarial examples.

Autonomous Vehicles · Data Augmentation +2
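For readers unfamiliar with the attack class, the sketch below crafts a purely digital adversarial example with the standard fast gradient sign method (FGSM). It illustrates the general concept only, not the physical, multi-camera attack studied in this paper; the model, input, and epsilon are placeholders.

```python
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Craft a digital adversarial example with the fast gradient sign method."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid image range.
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
```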

Protecting Publicly Available Data With Machine Learning Shortcuts

no code implementations · 30 Oct 2023 · Nicolas M. Müller, Maximilian Burgert, Pascal Debus, Jennifer Williams, Philip Sperl, Konstantin Böttinger

Machine-learning (ML) shortcuts or spurious correlations are artifacts in datasets that lead to very good training and test performance but severely limit the model's generalization capability.
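As a toy illustration of such a shortcut (synthetic data, not the datasets considered in the paper), the snippet below adds one artificial feature that leaks the label: the classifier scores almost perfectly while the feature is present and degrades once it is absent at deployment time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, size=n)
signal = rng.normal(size=(n, 10)) + 0.3 * y[:, None]     # weakly informative, generalizing features
shortcut = y[:, None] + 0.01 * rng.normal(size=(n, 1))   # dataset artifact that leaks the label

X = np.hstack([signal, shortcut])
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("with shortcut:", clf.score(X, y))                  # near-perfect accuracy

X_no_shortcut = X.copy()
X_no_shortcut[:, -1] = 0.0                                # the artifact is gone at deployment time
print("without shortcut:", clf.score(X_no_shortcut, y))   # accuracy drops
```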

Complex-valued neural networks for voice anti-spoofing

no code implementations · 22 Aug 2023 · Nicolas M. Müller, Philip Sperl, Konstantin Böttinger

Current anti-spoofing and audio deepfake detection systems use either magnitude spectrogram-based features (such as CQT or mel-spectrograms) or raw audio processed through convolutional or sinc layers.

DeepFake Detection · Face Swapping +1
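A brief sketch of the magnitude-based front end mentioned above, assuming librosa is available (file name and parameters are placeholders): taking the absolute value of the STFT is exactly what discards the phase information that a complex-valued network could still exploit.

```python
import numpy as np
import librosa

# Load any mono utterance; the path and sampling rate are placeholders.
audio, sr = librosa.load("utterance.wav", sr=16000)

# Magnitude-only features, as in many current anti-spoofing systems.
mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_fft=512, hop_length=160)
log_mel = librosa.power_to_db(mel)

# The underlying STFT is complex-valued; np.abs() throws the phase away.
stft = librosa.stft(audio, n_fft=512, hop_length=160)
magnitude, phase = np.abs(stft), np.angle(stft)
print(log_mel.shape, stft.dtype)
```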

Shortcut Detection with Variational Autoencoders

1 code implementation · 8 Feb 2023 · Nicolas M. Müller, Simon Roschmann, Shahbaz Khan, Philip Sperl, Konstantin Böttinger

For real-world applications of machine learning (ML), it is essential that models make predictions based on well-generalizing features rather than spurious correlations in the data.

Disentanglement

Optimizing Information Loss Towards Robust Neural Networks

no code implementations · 7 Aug 2020 · Philip Sperl, Konstantin Böttinger

To overcome the downsides of adversarial training while still providing a high level of security, we present a new training approach we call "entropic retraining".

A³: Activation Anomaly Analysis

1 code implementation · 3 Mar 2020 · Philip Sperl, Jan-Philipp Schulze, Konstantin Böttinger

Based on the activation values in the target network, the alarm network decides if the given sample is normal.

Semi-supervised Anomaly Detection · Supervised Anomaly Detection
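A rough sketch of the setup described above, assuming a frozen PyTorch target classifier: forward hooks gather hidden activations, and a small alarm network maps them to a normal/anomalous score. Layer choice and sizes are placeholders, not the authors' configuration.

```python
import torch
import torch.nn as nn

class AlarmNetwork(nn.Module):
    """Small MLP that scores a sample as normal vs. anomalous from activations."""
    def __init__(self, activation_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(activation_dim, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, activations: torch.Tensor) -> torch.Tensor:
        return self.net(activations)

def collect_activations(target: nn.Module, x: torch.Tensor, layers) -> torch.Tensor:
    """Run the frozen target network and concatenate the chosen hidden activations."""
    captured = []
    hooks = [layer.register_forward_hook(
        lambda _m, _i, out: captured.append(out.flatten(1))) for layer in layers]
    with torch.no_grad():
        target(x)
    for h in hooks:
        h.remove()
    return torch.cat(captured, dim=1)
```

The alarm network would then be trained with a binary objective on activations of normal and, where available, anomalous samples, while the target network itself stays unchanged.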

DLA: Dense-Layer-Analysis for Adversarial Example Detection

no code implementations · 5 Nov 2019 · Philip Sperl, Ching-Yu Kao, Peng Chen, Konstantin Böttinger

In this paper, we present a novel end-to-end framework to detect such attacks during classification without influencing the target model's performance.

Autonomous Driving · General Classification
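In a similar spirit, flattened dense-layer activations of the unmodified target model can feed a separate binary detector that runs alongside classification. The sketch below uses a generic scikit-learn classifier and hypothetical pre-extracted activation arrays, not the authors' architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical activation dumps taken from the target model's dense layers
# for benign inputs and for known adversarial examples (file names are placeholders).
benign = np.load("benign_dense_activations.npy")             # shape (n_benign, d)
adversarial = np.load("adversarial_dense_activations.npy")   # shape (n_adv, d)

X = np.vstack([benign, adversarial])
y = np.concatenate([np.zeros(len(benign)), np.ones(len(adversarial))])

# The detector is trained separately, so the target model's accuracy is untouched.
detector = LogisticRegression(max_iter=1000).fit(X, y)
print("flagged as adversarial:", detector.predict(X[:5]))
```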
