Search Results for author: Peter Lorenz

Found 8 papers, 6 papers with code

Adversarial Examples are Misaligned in Diffusion Model Manifolds

no code implementations · 12 Jan 2024 · Peter Lorenz, Ricard Durall, Janis Keuper

In recent years, diffusion models (DMs) have drawn significant attention for their success in approximating data distributions, yielding state-of-the-art generative results.

Adversarial Robustness · Image Inpainting

Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality

no code implementations · 5 Jul 2023 · Peter Lorenz, Ricard Durall, Janis Keuper

Diffusion models have recently been applied with great success to the visual synthesis of strikingly realistic-looking images (a sketch of the LID estimator named in the title follows below).

DeepFake Detection
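The title above refers to Local Intrinsic Dimensionality (LID). As a rough illustration, here is a minimal sketch of the standard maximum-likelihood LID estimator computed from k-nearest-neighbor distances; the feature space, the value of k, and the use as a detector are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np

def lid_mle(query, reference, k=20):
    """Maximum-likelihood LID estimate (Levina-Bickel style) for each
    query point, from its k nearest neighbors in a reference batch.
    query: (n, d) array; reference: (m, d) array; returns (n,) array."""
    # Pairwise Euclidean distances from query points to the reference set.
    dists = np.linalg.norm(query[:, None, :] - reference[None, :, :], axis=-1)
    # k smallest distances per query point; index 0 is skipped because we
    # assume query points are drawn from the reference batch (self-distance 0).
    knn = np.sort(dists, axis=1)[:, 1:k + 1]
    r_k = knn[:, -1:]  # distance to the k-th neighbor
    # LID_hat = -( (1/k) * sum_i log(r_i / r_k) )^(-1)
    return -1.0 / np.mean(np.log(knn / r_k + 1e-12), axis=1)
```

Generated and natural images tend to concentrate on sub-manifolds of different intrinsic dimensionality in a suitable feature space, so thresholding such LID estimates yields a simple detector; both the threshold and the feature extractor would be tuned in practice.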

Visual Prompting for Adversarial Robustness

2 code implementations · 12 Oct 2022 · Aochuan Chen, Peter Lorenz, Yuguang Yao, Pin-Yu Chen, Sijia Liu

In this work, we leverage visual prompting (VP) to improve the adversarial robustness of a fixed, pre-trained model at test time (see the sketch below).

Adversarial Defense · Adversarial Robustness · +1
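A minimal sketch of the test-time idea in PyTorch, assuming an additive pixel-space prompt around a frozen backbone; the prompt shape, optimizer, and training step below are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class VisualPrompt(nn.Module):
    """Learnable additive image-space prompt around a frozen classifier."""
    def __init__(self, model, image_size=224):
        super().__init__()
        self.model = model.eval()
        for p in self.model.parameters():   # keep the backbone fixed
            p.requires_grad_(False)
        # One prompt shared across the batch, same shape as the input image.
        self.prompt = nn.Parameter(torch.zeros(1, 3, image_size, image_size))

    def forward(self, x):
        # Only self.prompt receives gradients during training.
        return self.model(torch.clamp(x + self.prompt, 0.0, 1.0))

# Hypothetical training step on (possibly adversarial) images and labels:
# vp = VisualPrompt(pretrained_model)
# opt = torch.optim.Adam([vp.prompt], lr=1e-3)
# loss = nn.functional.cross_entropy(vp(images), labels)
# loss.backward(); opt.step()
```

Because only the prompt is optimized, the backbone's weights and inference cost stay untouched, which is what makes the approach attractive at test time.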

Is RobustBench/AutoAttack a suitable Benchmark for Adversarial Robustness?

2 code implementations · AAAI Workshop AdvML 2022 · Peter Lorenz, Dominik Strassel, Margret Keuper, Janis Keuper

In its most commonly reported sub-task, RobustBench evaluates and ranks the adversarial robustness of trained neural networks on CIFAR10 under AutoAttack (Croce and Hein 2020b) with l-inf perturbations limited to eps = 8/255; a sketch of this evaluation setting follows below.

Adversarial Attack Detection · Adversarial Robustness · +1
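The benchmark setting quoted above maps directly onto the reference autoattack package and the RobustBench quickstart; a minimal sketch (the model name and batch sizes are placeholders):

```python
from robustbench.data import load_cifar10
from robustbench.utils import load_model
from autoattack import AutoAttack  # pip install autoattack

# Any Linf CIFAR10 leaderboard entry works here; this name is illustrative.
model = load_model(model_name='Carmon2019Unlabeled',
                   dataset='cifar10', threat_model='Linf')
x_test, y_test = load_cifar10(n_examples=64)  # images in [0, 1], int labels

# The setting from the text: AutoAttack, l-inf, eps = 8/255.
adversary = AutoAttack(model, norm='Linf', eps=8/255, version='standard')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=64)
```

The reported robust accuracy is the fraction of test points that remain correctly classified on x_adv.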

Detecting AutoAttack Perturbations in the Frequency Domain

2 code implementations · ICML Workshop AML 2021 · Peter Lorenz, Paula Harder, Dominik Strassel, Margret Keuper, Janis Keuper

Adversarial attacks on image classification networks generated with the AutoAttack (Croce and Hein, 2020b) framework have recently drawn a lot of attention (see the frequency-domain sketch below).

Image Classification
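As a rough illustration of the frequency-domain idea in the title, here is a minimal sketch that turns images into log-magnitude 2D spectra; the binary classifier trained on top is a placeholder assumption, not the paper's exact detector.

```python
import numpy as np

def fourier_features(images):
    """Log-magnitude 2D DFT spectra as detector inputs.
    images: (n, h, w) grayscale array with values in [0, 1]."""
    # fft2 acts over the last two axes; fftshift centers the zero frequency.
    spectra = np.fft.fftshift(np.fft.fft2(images), axes=(-2, -1))
    return np.log1p(np.abs(spectra))  # compress the large dynamic range

# Hypothetical usage: fit any binary classifier (clean vs. attacked) on the
# flattened spectra, exploiting that adversarial perturbations tend to leave
# characteristic traces in the high-frequency bands.
# feats = fourier_features(batch).reshape(len(batch), -1)
```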
