no code implementations • 22 Apr 2024 • Jonas Ricker, Dennis Assenmacher, Thorsten Holz, Asja Fischer, Erwin Quiring
Recent advances in the field of generative artificial intelligence (AI) have blurred the lines between authentic and machine-generated content, making it almost impossible for humans to distinguish between the two.
1 code implementation • 27 Mar 2024 • Andreas Müller, Erwin Quiring
We empirically examine our findings in a comprehensive evaluation with multiple image classification models and show that our attack achieves the same sparsity effect as prior sponge-example methods, but at a fraction of the computational effort.
1 code implementation • 23 Oct 2023 • Erwin Quiring, Andreas Müller, Konrad Rieck
Unfortunately, this preprocessing step is vulnerable to so-called image-scaling attacks where an attacker makes unnoticeable changes to an image so that it becomes a new image after scaling.
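The core idea of an image-scaling attack can be illustrated with a minimal sketch. The snippet below is not the paper's actual attack or defense: it assumes a naive nearest-neighbour scaler that samples every `factor`-th pixel, so the attacker only needs to overwrite those sampled positions with a hidden target image. The function names `nn_downscale` and `scaling_attack` are illustrative, not from the paper.

```python
import numpy as np

def nn_downscale(img, factor):
    # naive nearest-neighbour downscaling: keep every `factor`-th pixel
    return img[::factor, ::factor]

def scaling_attack(source, target, factor):
    # overwrite only the pixels the scaler will sample with the target image;
    # all other pixels of the source image remain untouched
    attacked = source.copy()
    attacked[::factor, ::factor] = target
    return attacked

# toy example with random grayscale "images"
rng = np.random.default_rng(0)
source = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # benign image
target = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)    # hidden image
attacked = scaling_attack(source, target, factor=8)

# at most 64 of 4096 pixels change, yet downscaling yields the hidden image
assert np.array_equal(nn_downscale(attacked, 8), target)
print("modified pixels:", np.count_nonzero(attacked != source), "of", source.size)
```

Real scaling libraries interpolate over several source pixels rather than sampling just one, so practical attacks solve an optimization problem to keep the changes visually unnoticeable; the sketch only conveys why downscaling can expose an entirely different image.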
1 code implementation • 25 Mar 2023 • Thorsten Eisenhofer, Erwin Quiring, Jonas Möller, Doreen Riepel, Thorsten Holz, Konrad Rieck
In this paper, we show that this automation can be manipulated using adversarial learning.
1 code implementation • 26 Aug 2022 • Micha Horlboge, Erwin Quiring, Roland Meyer, Konrad Rieck
We prove that the task of generating a $k$-anonymous program -- a program that cannot be attributed to one of $k$ authors -- is not computable in the general case.
1 code implementation • 25 May 2022 • Vera Wesselkamp, Konrad Rieck, Daniel Arp, Erwin Quiring
In particular, we show that an adversary can remove indicative artifacts, the GAN fingerprint, directly from the frequency spectrum of a generated image.
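A simple way to see why frequency-domain manipulation is possible is a low-pass sketch like the one below. This is only an illustration, not the paper's attack: GAN up-sampling artifacts tend to concentrate in high frequencies, so zeroing coefficients beyond an (assumed) radius `keep_radius` in the shifted 2D spectrum suppresses such artifacts while keeping the low-frequency image content.

```python
import numpy as np

def remove_high_freq_artifacts(img, keep_radius):
    # move the image into the frequency domain, DC component centered
    spec = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h // 2) ** 2 + (xx - w // 2) ** 2)
    # zero out high-frequency coefficients, where periodic
    # up-sampling artifacts of generated images typically reside
    spec[dist > keep_radius] = 0
    # transform back and drop the (numerically tiny) imaginary part
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))

# toy example: a flat image with a high-frequency checkerboard "artifact"
img = np.full((32, 32), 100.0)
img[::2, ::2] += 10.0
cleaned = remove_high_freq_artifacts(img, keep_radius=4)
```

In this toy case the checkerboard lives entirely at Nyquist frequencies, so the filtered result collapses to the image mean; a targeted fingerprint-removal attack would instead modify only the specific spectral regions a detector relies on.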
1 code implementation • 19 Oct 2020 • Erwin Quiring, Lukas Pirch, Michael Reimsbach, Daniel Arp, Konrad Rieck
Consequently, adversaries will also target the learning system and use evasion attacks to bypass malware detection.
no code implementations • 19 Oct 2020 • Daniel Arp, Erwin Quiring, Feargus Pendlebury, Alexander Warnecke, Fabio Pierazzi, Christian Wressnegger, Lorenzo Cavallaro, Konrad Rieck
With the growing processing power of computing systems and the increasing availability of massive datasets, machine learning algorithms have led to major breakthroughs in many different areas.
no code implementations • 19 Mar 2020 • Erwin Quiring, Konrad Rieck
By combining poisoning and image-scaling attacks, we can conceal the trigger of backdoors as well as hide the overlays of clean-label poisoning.
1 code implementation • 29 May 2019 • Erwin Quiring, Alwin Maier, Konrad Rieck
In this paper, we present a novel attack against authorship attribution of source code.
no code implementations • 16 Mar 2017 • Erwin Quiring, Daniel Arp, Konrad Rieck
This problem has motivated the research field of adversarial machine learning, which is concerned with attacking and defending learning methods.