Search Results for author: Konstantin Böttinger

Found 21 papers, 5 papers with code

Physical Adversarial Examples for Multi-Camera Systems

no code implementations • 14 Nov 2023 • Ana Răduţoiu, Jan-Philipp Schulze, Philip Sperl, Konstantin Böttinger

Neural networks form the foundation of several intelligent systems; however, these systems are known to be easily fooled by adversarial examples.

Autonomous Vehicles, Data Augmentation, +2
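As a rough illustration of why adversarial examples are so easy to produce, below is a minimal single-step FGSM sketch against a toy PyTorch classifier. The model, input, and epsilon are placeholders and have nothing to do with the paper's physical, multi-camera setting.

```python
import torch
import torch.nn as nn

# Toy stand-in classifier; the paper attacks real multi-camera perception models.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

def fgsm_attack(x, label, epsilon=0.03):
    """Perturb x by epsilon in the direction of the loss gradient's sign."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # one-step gradient-sign perturbation
    return x_adv.clamp(0.0, 1.0).detach()  # keep the image in a valid pixel range

# Usage with a random "image" and label.
x = torch.rand(1, 3, 32, 32)
y = torch.tensor([3])
x_adv = fgsm_attack(x, y)
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())
```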

Protecting Publicly Available Data With Machine Learning Shortcuts

no code implementations • 30 Oct 2023 • Nicolas M. Müller, Maximilian Burgert, Pascal Debus, Jennifer Williams, Philip Sperl, Konstantin Böttinger

Machine-learning (ML) shortcuts or spurious correlations are artifacts in datasets that lead to very good training and test performance but severely limit the model's generalization capability.
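To make the notion of a shortcut concrete, here is a toy sketch (not the paper's data or method): one feature simply copies the label, so a linear classifier scores perfectly until that "watermark" disappears at deployment time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)

# 20 noise features plus one "shortcut" column that simply copies the label,
# e.g. a watermark that only ever appears in one class of the collected data.
X = rng.normal(size=(n, 20))
X_shortcut = np.hstack([y[:, None].astype(float), X])

clf = LogisticRegression(max_iter=1000).fit(X_shortcut[:800], y[:800])
print("test acc with shortcut:   ", clf.score(X_shortcut[800:], y[800:]))

# Deployment-time data where the watermark is absent: the shortcut breaks.
X_clean = X_shortcut[800:].copy()
X_clean[:, 0] = 0.0
print("test acc without shortcut:", clf.score(X_clean, y[800:]))
```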

Complex-valued neural networks for voice anti-spoofing

no code implementations • 22 Aug 2023 • Nicolas M. Müller, Philip Sperl, Konstantin Böttinger

Current anti-spoofing and audio deepfake detection systems use either magnitude spectrogram-based features (such as CQT or Mel spectrograms) or raw audio processed through convolution or sinc layers.

DeepFake Detection, Face Swapping, +1
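For context on the two front ends mentioned in the excerpt, the sketch below (not the authors' pipeline; all parameter values are arbitrary) shows that a magnitude spectrogram discards the phase that a complex STFT, and hence a complex-valued network, can retain.

```python
import torch

signal = torch.randn(16000)              # 1 s of fake audio at 16 kHz
window = torch.hann_window(512)

# Complex STFT: real and imaginary parts retain the phase information
# that a complex-valued network can consume directly.
stft = torch.stft(signal, n_fft=512, hop_length=160,
                  window=window, return_complex=True)

magnitude = stft.abs()                   # typical magnitude-spectrogram front end
phase = stft.angle()                     # discarded by magnitude-only features

print(stft.shape, stft.dtype)            # complex bins x frames
print(magnitude.shape, phase.shape)
```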

Shortcut Detection with Variational Autoencoders

1 code implementation • 8 Feb 2023 • Nicolas M. Müller, Simon Roschmann, Shahbaz Khan, Philip Sperl, Konstantin Böttinger

For real-world applications of machine learning (ML), it is essential that models make predictions based on well-generalizing features rather than spurious correlations in the data.

Disentanglement

Introducing Model Inversion Attacks on Automatic Speaker Recognition

no code implementations • 9 Jan 2023 • Karla Pizzi, Franziska Boenisch, Ugur Sahin, Konstantin Böttinger

To the best of our knowledge, our work is the first to extend MI attacks to audio data, and our results highlight the security risks resulting from the extraction of biometric data in this setup.

Speaker Recognition

Localized Shortcut Removal

no code implementations • 24 Nov 2022 • Nicolas M. Müller, Jochen Jacobs, Jennifer Williams, Konstantin Böttinger

This is often due to the existence of machine learning shortcuts: features in the data that are predictive but unrelated to the problem at hand.

Does Audio Deepfake Detection Generalize?

no code implementations • 30 Mar 2022 • Nicolas M. Müller, Pavel Czempin, Franziska Dieckmann, Adam Froghyar, Konstantin Böttinger

Current text-to-speech algorithms produce realistic fakes of human voices, making deepfake detection a much-needed area of research.

DeepFake Detection, Face Swapping

Defending Against Adversarial Denial-of-Service Data Poisoning Attacks

no code implementations • 14 Apr 2021 • Nicolas M. Müller, Simon Roschmann, Konstantin Böttinger

Since many applications rely on untrusted training data, an attacker can easily craft malicious samples and inject them into the training dataset to degrade the performance of machine learning models.

Anomaly Detection, BIG-bench Machine Learning, +2
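As a crude sketch of this threat model (a simple label-flipping attack, not the paper's attack or defense), poisoning a fraction of untrusted training labels already degrades a basic classifier:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("clean accuracy:   ", clean.score(X_te, y_te))

# Attacker controls 20% of the (untrusted) training data and flips those labels.
rng = np.random.default_rng(0)
poison_idx = rng.choice(len(y_tr), size=len(y_tr) // 5, replace=False)
y_poisoned = y_tr.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```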

Deep Reinforcement Learning for Backup Strategies against Adversaries

no code implementations • 12 Feb 2021 • Pascal Debus, Nicolas Müller, Konstantin Böttinger

In this setting, the usual round-robin scheme, which always replaces the oldest backup, is no longer optimal with respect to avoidable exposure.

reinforcement-learning, Reinforcement Learning (RL)
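For reference, the round-robin baseline mentioned in the excerpt looks like the sketch below: the oldest backup is always evicted, which is exactly the policy the paper argues can be suboptimal against an adversary. The class and snapshot names are made up; this is not the proposed RL method.

```python
from collections import deque

class RoundRobinBackups:
    """Keep at most `slots` backups, always evicting the oldest one."""

    def __init__(self, slots: int):
        self.slots = deque(maxlen=slots)  # deque drops the oldest automatically

    def store(self, snapshot_id: str) -> None:
        self.slots.append(snapshot_id)

    def available(self) -> list[str]:
        return list(self.slots)

backups = RoundRobinBackups(slots=3)
for day in range(1, 8):
    backups.store(f"snapshot-day-{day}")
print(backups.available())   # only the three most recent days survive
```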

Adversarial Vulnerability of Active Transfer Learning

no code implementations • 26 Jan 2021 • Nicolas M. Müller, Konstantin Böttinger

In this paper, we share an intriguing observation: the combination of these techniques is particularly susceptible to a new kind of data poisoning attack. By adding small adversarial noise to the input, it is possible to create a collision in the output space of the transfer learner.

Active Learning, Data Poisoning, +1
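A rough sketch of the collision idea, under the assumption of a frozen feature extractor used for transfer learning (model, inputs, and hyperparameters are placeholders, not the paper's setup): small input noise is optimized so that one sample's features collide with another's.

```python
import torch
import torch.nn as nn

# Frozen stand-in feature extractor of the transfer learner.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
for p in feature_extractor.parameters():
    p.requires_grad_(False)

x_base = torch.rand(1, 3, 32, 32)    # sample the attacker perturbs
x_target = torch.rand(1, 3, 32, 32)  # sample whose features we want to collide with

delta = torch.zeros_like(x_base, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

target_feat = feature_extractor(x_target).detach()
for _ in range(200):
    opt.zero_grad()
    feat = feature_extractor((x_base + delta).clamp(0, 1))
    # Match features while keeping the perturbation small and inconspicuous.
    loss = nn.functional.mse_loss(feat, target_feat) + 0.1 * delta.abs().mean()
    loss.backward()
    opt.step()

print("feature distance after attack:",
      nn.functional.mse_loss(feature_extractor((x_base + delta).clamp(0, 1)),
                             target_feat).item())
```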

Towards Resistant Audio Adversarial Examples

2 code implementations • 14 Oct 2020 • Tom Dörr, Karla Markert, Nicolas M. Müller, Konstantin Böttinger

We devise an approach to mitigate this flaw and find that our method improves generation of adversarial examples with varying offsets.

Adversarial Attack, speech-recognition, +1

Data Poisoning Attacks on Regression Learning and Corresponding Defenses

2 code implementations • 15 Sep 2020 • Nicolas Michael Müller, Daniel Kowatsch, Konstantin Böttinger

Adversarial data poisoning is an effective attack against machine learning and threatens model integrity by introducing poisoned data into the training dataset.

Data Poisoning, regression

Optimizing Information Loss Towards Robust Neural Networks

no code implementations • 7 Aug 2020 • Philip Sperl, Konstantin Böttinger

To overcome the downsides of adversarial training while still providing a high level of security, we present a new training approach we call entropic retraining.

A³: Activation Anomaly Analysis

1 code implementation • 3 Mar 2020 • Philip Sperl, Jan-Philipp Schulze, Konstantin Böttinger

Based on the activation values in the target network, the alarm network decides if the given sample is normal.

Semi-supervised Anomaly Detection, Supervised Anomaly Detection
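A minimal sketch of that target/alarm split, not the authors' implementation (layer sizes are placeholders and the alarm network would still need to be trained): forward hooks collect the target network's hidden activations, and a small alarm network maps them to an anomaly score.

```python
import torch
import torch.nn as nn

# Pretrained target network (stand-in) whose hidden activations we observe.
target = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

activations = []
def capture(_module, _inp, out):
    activations.append(out.detach())

# Hook the hidden layers so their outputs are recorded on every forward pass.
for layer in target[:-1]:
    layer.register_forward_hook(capture)

# Alarm network: maps the concatenated activations to an anomaly score.
# In practice it would be trained on normal (and, if available, anomalous) data.
alarm = nn.Sequential(nn.Linear(64 + 64, 32), nn.ReLU(), nn.Linear(32, 1))

def anomaly_score(x):
    activations.clear()
    with torch.no_grad():
        target(x)
    feats = torch.cat([a.flatten(1) for a in activations], dim=1)
    return torch.sigmoid(alarm(feats))   # close to 1 -> flagged as anomalous

print(anomaly_score(torch.randn(4, 32)).shape)  # one score per sample
```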

DLA: Dense-Layer-Analysis for Adversarial Example Detection

no code implementations • 5 Nov 2019 • Philip Sperl, Ching-Yu Kao, Peng Chen, Konstantin Böttinger

In this paper, we present a novel end-to-end framework to detect such attacks during classification without influencing the target model's performance.

Autonomous Driving, General Classification

Deep Reinforcement Fuzzing

no code implementations • 14 Jan 2018 • Konstantin Böttinger, Patrice Godefroid, Rishabh Singh

Fuzzing is the process of finding security vulnerabilities in input-processing code by repeatedly testing the code with modified inputs.

Q-Learning, reinforcement-learning, +1
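As a bare-bones illustration of that loop (the paper instead learns the input mutations with reinforcement learning / Q-learning), here is a tiny random mutation fuzzer against a made-up parser:

```python
import random

def target_parser(data: bytes) -> None:
    """Made-up input-processing code with a hidden crash condition."""
    if data and data[0] == 0x42:
        raise ValueError("parser crash")

def mutate(seed: bytes) -> bytes:
    """Replace one random byte of the seed with a random value."""
    data = bytearray(seed)
    pos = random.randrange(len(data))
    data[pos] = random.randrange(256)
    return bytes(data)

random.seed(0)
seed = b"GOOD INPUT"
for i in range(50000):                   # repeatedly test with modified inputs
    candidate = mutate(seed)
    try:
        target_parser(candidate)
    except ValueError:
        print(f"crash found after {i + 1} inputs: {candidate!r}")
        break
```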
