Search Results for author: Lea Schönherr

Found 21 papers, 13 papers with code

Buffer-free Class-Incremental Learning with Out-of-Distribution Detection

no code implementations • 29 May 2025 • Srishti Gupta, Daniele Angioni, Maura Pintor, Ambra Demontis, Lea Schönherr, Battista Biggio, Fabio Roli

In this paper, we present an in-depth analysis of post-hoc OOD detection methods and investigate their potential to eliminate the need for a memory buffer.

class-incremental learning, Class Incremental Learning +3
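The excerpt does not name the detectors analyzed, but a minimal sketch of one common post-hoc OOD score, maximum softmax probability, illustrates the class of buffer-free methods under study; `model` and the threshold `tau` are placeholder assumptions, not the paper's setup.

```python
# Minimal sketch: maximum softmax probability (MSP) as a post-hoc OOD
# score. Higher score = more confident = more likely in-distribution.
# `model` is any trained classifier; `tau` is an illustrative threshold.
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_score(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    logits = model(x)                                   # (batch, num_classes)
    return F.softmax(logits, dim=-1).max(dim=-1).values

def is_ood(model: torch.nn.Module, x: torch.Tensor,
           tau: float = 0.5) -> torch.Tensor:
    # Samples with confidence below the threshold are flagged as OOD,
    # i.e., as not belonging to any class learned so far.
    return msp_score(model, x) < tau
```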

Security Benefits and Side Effects of Labeling AI-Generated Images

no code implementations • 28 May 2025 • Sandra Höltervennhoff, Jonas Ricker, Maike M. Raphael, Charlotte Schwedes, Rebecca Weil, Asja Fischer, Thorsten Holz, Lea Schönherr, Sascha Fahl

Assuming that simplicity, transparency, and trust are likely to impact the successful adoption of such labels, we first qualitatively explore users' opinions and expectations of AI labeling using five focus groups.

Misinformation

Rethinking Robustness in Machine Learning: A Posterior Agreement Approach

1 code implementation • 20 Mar 2025 • João Borges S. Carvalho, Alessandro Torcinovich, Victor Jimenez Rodriguez, Antonio E. Cinà, Carlos Cotrini, Lea Schönherr, Joachim M. Buhmann

The robustness of algorithms against covariate shifts is a fundamental problem with critical implications for the deployment of machine learning algorithms in the real world.

Domain Generalization

Fake It Until You Break It: On the Adversarial Robustness of AI-generated Image Detectors

1 code implementation • 2 Oct 2024 • Sina Mavali, Jonas Ricker, David Pape, Yash Sharma, Asja Fischer, Lea Schönherr

While generative AI (GenAI) offers countless possibilities for creative and productive tasks, artificially generated media can be misused for fraud, manipulation, scams, misinformation campaigns, and more.

Adversarial Robustness, Misinformation

Prompt Obfuscation for Large Language Models

no code implementations • 17 Sep 2024 • David Pape, Sina Mavali, Thorsten Eisenhofer, Lea Schönherr

Overall, we demonstrate that prompt obfuscation is an effective mechanism to safeguard the intellectual property of a system prompt while maintaining the same utility as the original prompt.

Large Language Model, Semantic Similarity +1
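The excerpt claims the obfuscated prompt retains the original's utility. One hedged way to check such a claim, sketched below, is to compare the model's outputs under both prompts via embedding similarity; `generate` and `embed` are hypothetical placeholders, not the paper's actual evaluation code.

```python
# Sketch: declare utility preserved if outputs under the original and
# obfuscated system prompts stay semantically close on a query set.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def utility_preserved(queries, generate, embed,
                      original_prompt: str, obfuscated_prompt: str,
                      min_sim: float = 0.9) -> bool:
    # generate(prompt, query) -> str and embed(text) -> np.ndarray are
    # assumed stand-ins for an LLM call and a sentence embedder.
    sims = [cosine(embed(generate(original_prompt, q)),
                   embed(generate(obfuscated_prompt, q)))
            for q in queries]
    return float(np.mean(sims)) >= min_sim
```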

HexaCoder: Secure Code Generation via Oracle-Guided Synthetic Training Data

1 code implementation • 10 Sep 2024 • Hossein Hajipour, Lea Schönherr, Thorsten Holz, Mario Fritz

The data synthesis pipeline generates pairs of vulnerable and fixed codes for specific Common Weakness Enumeration (CWE) types by utilizing a state-of-the-art LLM for repairing vulnerable code.

Code Generation
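A hypothetical sketch of the synthesis loop the abstract describes, assuming a stand-in `repair_with_llm` call and a security oracle `has_vulnerability` (e.g., a static analyzer); this is not HexaCoder's actual API, only the pairing logic.

```python
# Sketch: build (vulnerable, fixed) training pairs per CWE type by asking
# an LLM to repair vulnerable code and keeping only oracle-verified fixes.
from typing import Iterable

def synthesize_pairs(snippets: Iterable[tuple[str, str]],
                     repair_with_llm,
                     has_vulnerability) -> list[dict]:
    pairs = []
    for code, cwe_id in snippets:
        if not has_vulnerability(code, cwe_id):
            continue                      # only start from truly vulnerable code
        fixed = repair_with_llm(code, cwe_id)
        if has_vulnerability(fixed, cwe_id):
            continue                      # the oracle rejects incomplete repairs
        pairs.append({"cwe": cwe_id, "vulnerable": code, "fixed": fixed})
    return pairs
```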

Whispers in the Machine: Confidentiality in LLM-integrated Systems

1 code implementation • 10 Feb 2024 • Jonathan Evertz, Merlin Chlosta, Lea Schönherr, Thorsten Eisenhofer

Large Language Models (LLMs) are increasingly augmented with external tools and commercial services, forming LLM-integrated systems.

A Representative Study on Human Detection of Artificially Generated Media Across Countries

1 code implementation • 10 Dec 2023 • Joel Frank, Franziska Herbert, Jonas Ricker, Lea Schönherr, Thorsten Eisenhofer, Asja Fischer, Markus Dürmuth, Thorsten Holz

To further understand which factors influence people's ability to detect generated media, we include personal variables, chosen based on a literature review in the domains of deepfake and fake news research.

Face Swapping, Human Detection

Cooperation, Competition, and Maliciousness: LLM-Stakeholders Interactive Negotiation

2 code implementations • 29 Sep 2023 • Sahar Abdelnabi, Amr Gomaa, Sarath Sivaprasad, Lea Schönherr, Mario Fritz

The fundamental task of negotiation spans many key features of communication, such as cooperation, competition, and manipulation potentials.

Decision Making

On the Limitations of Model Stealing with Uncertainty Quantification Models

no code implementations • 9 May 2023 • David Pape, Sina Däubener, Thorsten Eisenhofer, Antonio Emanuele Cinà, Lea Schönherr

We find that, during training, the models tend to make similar predictions, indicating that the network diversity we wanted to leverage using uncertainty quantification models is not high enough to yield improvements on the model stealing task.

Diversity, Uncertainty Quantification

Dompteur: Taming Audio Adversarial Examples

1 code implementation • 10 Feb 2021 • Thorsten Eisenhofer, Lea Schönherr, Joel Frank, Lars Speckemeier, Dorothea Kolossa, Thorsten Holz

In this paper, we propose a different perspective: we accept the presence of adversarial examples against ASR systems, but require them to be perceivable by human listeners.

Automatic Speech Recognition, Automatic Speech Recognition (ASR) +1

VenoMave: Targeted Poisoning Against Speech Recognition

1 code implementation • 21 Oct 2020 • Hojjat Aghakhani, Lea Schönherr, Thorsten Eisenhofer, Dorothea Kolossa, Thorsten Holz, Christopher Kruegel, Giovanni Vigna

In a more realistic scenario, when the target audio waveform is played over the air in different rooms, VENOMAVE maintains a success rate of up to 73.3%.

Automatic Speech Recognition, Automatic Speech Recognition (ASR) +3

Detecting Adversarial Examples for Speech Recognition via Uncertainty Quantification

1 code implementation • 24 May 2020 • Sina Däubener, Lea Schönherr, Asja Fischer, Dorothea Kolossa

The neural networks for uncertainty quantification simultaneously diminish the vulnerability to the attack, which is reflected in a lower recognition accuracy of the malicious target text in comparison to a standard hybrid ASR system.

Automatic Speech Recognition, Automatic Speech Recognition (ASR) +3
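As a rough illustration of uncertainty-based detection, the sketch below scores inputs by Monte Carlo dropout variance, one standard uncertainty estimate; the paper's hybrid ASR setup and its specific uncertainty models are abstracted away, and the threshold is an assumption.

```python
# Sketch: flag inputs whose predictive uncertainty (Monte Carlo dropout
# variance) is unusually high, as adversarial examples often are.
# `model` is any network with dropout layers; `tau` is illustrative.
import torch

def mc_dropout_uncertainty(model: torch.nn.Module, x: torch.Tensor,
                           n_samples: int = 20) -> torch.Tensor:
    model.train()                          # keep dropout active at inference
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(-1) for _ in range(n_samples)])
    return probs.var(dim=0).mean(dim=-1)   # mean per-class variance per input

def looks_adversarial(model, x, tau: float = 1e-3) -> torch.Tensor:
    return mc_dropout_uncertainty(model, x) > tau
```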

Leveraging Frequency Analysis for Deep Fake Image Recognition

1 code implementation • ICML 2020 • Joel Frank, Thorsten Eisenhofer, Lea Schönherr, Asja Fischer, Dorothea Kolossa, Thorsten Holz

Based on this analysis, we demonstrate how the frequency representation can be used to identify deep fake images in an automated way, surpassing state-of-the-art methods.

Image Forensics
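The paper's key observation is that generated images leave artifacts in the frequency domain. A minimal sketch of the kind of feature this suggests, a log-scaled 2D DCT spectrum that can be fed to an ordinary classifier, follows; the exact preprocessing is an assumption, not the paper's pipeline.

```python
# Sketch: 2D DCT spectrum as a feature for detecting generated images.
# The log scale makes grid-like upsampling artifacts easier to separate.
import numpy as np
from scipy.fft import dctn

def dct_features(image_gray: np.ndarray) -> np.ndarray:
    """image_gray: (H, W) float array in [0, 1]."""
    spectrum = dctn(image_gray, norm="ortho")      # type-II 2D DCT
    return np.log1p(np.abs(spectrum)).ravel()      # flatten for a classifier
```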

Imperio: Robust Over-the-Air Adversarial Examples for Automatic Speech Recognition Systems

no code implementations • 5 Aug 2019 • Lea Schönherr, Thorsten Eisenhofer, Steffen Zeiler, Thorsten Holz, Dorothea Kolossa

In this paper, we demonstrate the first algorithm that produces generic adversarial examples, which remain robust in an over-the-air attack that is not adapted to the specific environment.

Automatic Speech Recognition, Automatic Speech Recognition (ASR) +1

Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding

no code implementations • 16 Aug 2018 • Lea Schönherr, Katharina Kohls, Steffen Zeiler, Thorsten Holz, Dorothea Kolossa

We use this backpropagation to learn the degrees of freedom for the adversarial perturbation of the input signal, i.e., we apply a psychoacoustic model and manipulate the acoustic signal below the thresholds of human perception.

Cryptography and Security, Sound, Audio and Speech Processing
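A simplified sketch of the constraint the abstract describes: cap an adversarial perturbation's magnitude spectrogram at a precomputed psychoacoustic masking threshold. Computing the threshold itself (an MPEG-style hearing model in the paper) is abstracted away here as an input.

```python
# Sketch: scale down any STFT bin of the perturbation that exceeds the
# masking threshold, so the perturbation stays below human perception.
import numpy as np

def clip_below_masking_threshold(pert_stft: np.ndarray,
                                 threshold: np.ndarray) -> np.ndarray:
    """pert_stft: complex STFT; threshold: real array of the same shape."""
    mag = np.abs(pert_stft)
    scale = np.minimum(1.0, threshold / np.maximum(mag, 1e-12))
    return pert_stft * scale               # keep phase, cap magnitude
```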
