Search Results for author: Lukas Struppek

Found 17 papers, 13 papers with code

CollaFuse: Navigating Limited Resources and Privacy in Collaborative Generative AI

no code implementations • 29 Feb 2024 • Domenique Zipperling, Simeon Allmendinger, Lukas Struppek, Niklas Kühl

Tailored for the efficient and collaborative use of denoising diffusion probabilistic models, CollaFuse enables shared, server-based training and inference, alleviating the computational burden on clients.

Autonomous Driving • Denoising • +2
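
A minimal sketch of the split-inference idea follows (purely illustrative: `server_model`, `client_model`, the split point `t_split`, and the `diffusers`-style scheduler interface are all assumptions, not CollaFuse's actual API):

```python
import torch

def collaborative_sample(server_model, client_model, scheduler, x_T, t_split):
    """Run the early denoising steps on the server and finish the last,
    more data-revealing steps locally on the client."""
    x = x_T
    for t in scheduler.timesteps:  # timesteps run from T down to 0
        model = server_model if t > t_split else client_model
        with torch.no_grad():
            noise_pred = model(x, t)  # predicted noise at step t
        x = scheduler.step(noise_pred, t, x).prev_sample
    return x
```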

Exploring the Adversarial Capabilities of Large Language Models

no code implementations • 14 Feb 2024 • Lukas Struppek, Minh Hieu Le, Dominik Hintersdorf, Kristian Kersting

The proliferation of large language models (LLMs) has sparked widespread interest due to their strong language generation capabilities, offering great potential for both industry and research.

Hate Speech Detection • Text Generation

Defending Our Privacy With Backdoors

1 code implementation • 12 Oct 2023 • Dominik Hintersdorf, Lukas Struppek, Daniel Neider, Kristian Kersting

We propose a simple yet effective defense based on backdoor attacks that removes private information, such as the names and faces of individuals, from vision-language models by fine-tuning them for only a few minutes instead of re-training them from scratch.
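
The gist of the defense can be sketched as a targeted fine-tuning objective (a sketch under assumed encoder interfaces; the linked repository holds the actual implementation): prompts containing a private name are pulled toward the embedding of a neutral phrase such as "a person".

```python
import torch.nn.functional as F

def privacy_backdoor_loss(student, frozen_teacher, name_prompts, neutral_prompt):
    # Map every prompt mentioning the private name onto the frozen
    # encoder's embedding of a neutral phrase, e.g. "a person".
    target = frozen_teacher(neutral_prompt).detach()
    current = student(name_prompts)  # (N, D) text embeddings
    return 1.0 - F.cosine_similarity(current, target.expand_as(current)).mean()
```

In practice, one would combine this with a utility term that keeps the embeddings of unrelated prompts close to the original encoder's.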

Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks

1 code implementation • 10 Oct 2023 • Lukas Struppek, Dominik Hintersdorf, Kristian Kersting

Label smoothing -- using softened labels instead of hard ones -- is a widely adopted regularization method for deep learning, showing diverse benefits such as enhanced generalization and calibration.
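
For readers unfamiliar with the technique, a minimal PyTorch rendering of smoothed cross-entropy looks as follows (an illustration, not the paper's code; `alpha` is the smoothing factor whose sign and magnitude the paper relates to model inversion vulnerability):

```python
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits, targets, alpha=0.1):
    num_classes = logits.size(-1)
    one_hot = F.one_hot(targets, num_classes).float()
    # Soften the hard labels: 1 - alpha on the true class,
    # with the mass alpha spread uniformly over all classes.
    soft = one_hot * (1.0 - alpha) + alpha / num_classes
    return torch.sum(-soft * F.log_softmax(logits, dim=-1), dim=-1).mean()

loss = smoothed_cross_entropy(torch.randn(4, 10), torch.tensor([1, 3, 5, 7]))
```

For positive `alpha`, this matches PyTorch's built-in `F.cross_entropy(..., label_smoothing=alpha)`.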

Leveraging Diffusion-Based Image Variations for Robust Training on Poisoned Data

1 code implementation • 10 Oct 2023 • Lukas Struppek, Martin B. Hentschel, Clifton Poth, Dominik Hintersdorf, Kristian Kersting

To address this challenge, we propose a novel approach that enables model training on potentially poisoned datasets by utilizing the power of recent diffusion models.

Knowledge Distillation
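
One way to realize such purification with off-the-shelf tools is an image-variation diffusion pipeline that regenerates every training sample (the checkpoint below is an assumption for illustration; the paper's released code is authoritative):

```python
from PIL import Image
from diffusers import StableDiffusionImageVariationPipeline

# Publicly available image-variation checkpoint, assumed here for the demo.
pipe = StableDiffusionImageVariationPipeline.from_pretrained(
    "lambdalabs/sd-image-variations-diffusers", revision="v2.0")

suspect = Image.open("maybe_poisoned.png").convert("RGB")  # hypothetical file
# Re-synthesize the sample: the variation keeps the visible content but
# regenerates every pixel, which is unlikely to reproduce a backdoor trigger.
clean_variant = pipe(suspect, guidance_scale=3.0).images[0]
```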

Balancing Transparency and Risk: The Security and Privacy Risks of Open-Source Machine Learning Models

no code implementations • 18 Aug 2023 • Dominik Hintersdorf, Lukas Struppek, Kristian Kersting

The field of artificial intelligence (AI) has experienced remarkable progress in recent years, driven by the widespread adoption of open-source machine learning models in both research and industry.

Self-Driving Cars

Class Attribute Inference Attacks: Inferring Sensitive Class Information by Diffusion-Based Attribute Manipulations

1 code implementation • 16 Mar 2023 • Lukas Struppek, Dominik Hintersdorf, Felix Friedrich, Manuel Brack, Patrick Schramowski, Kristian Kersting

Neural network-based image classifiers are powerful tools for computer vision tasks, but they inadvertently reveal sensitive attribute information about their classes, raising privacy concerns.

Attribute • Face Recognition • +2
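
Stripped of the diffusion-based editing, the attack logic reduces to comparing the target classifier's confidence across attribute-manipulated variants of an image (the helper names here are hypothetical; see the released code for the full pipeline):

```python
import torch

@torch.no_grad()
def infer_sensitive_attribute(target, variants, class_idx):
    # variants: dict mapping an attribute value (e.g. an eye color) to a
    # batch of images edited to show exactly that value.
    scores = {value: target(images).softmax(dim=-1)[:, class_idx].mean().item()
              for value, images in variants.items()}
    # The attribute value the classifier finds most consistent with the
    # class is inferred as the leaked sensitive information.
    return max(scores, key=scores.get)
```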

Fair Diffusion: Instructing Text-to-Image Generation Models on Fairness

1 code implementation • 7 Feb 2023 • Felix Friedrich, Manuel Brack, Lukas Struppek, Dominik Hintersdorf, Patrick Schramowski, Sasha Luccioni, Kristian Kersting

Generative AI models have recently achieved astonishing quality and are consequently being employed in a fast-growing number of applications.

Fairness • Text-to-Image Generation

Rickrolling the Artist: Injecting Backdoors into Text Encoders for Text-to-Image Synthesis

3 code implementations • ICCV 2023 • Lukas Struppek, Dominik Hintersdorf, Kristian Kersting

We introduce backdoor attacks against text-guided generative models and demonstrate that their text encoders pose a major tampering risk.

Image Generation
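
The attack can be summarized as a teacher-student fine-tuning objective (a sketch under assumed encoder interfaces, not the released code): the poisoned student mimics the clean teacher on benign prompts but maps trigger-containing prompts onto the embedding of an attacker-chosen target prompt.

```python
import torch.nn.functional as F

def backdoor_injection_loss(student, teacher, clean_prompts, triggered_prompts,
                            target_prompt, beta=1.0):
    # Utility: behave like the frozen teacher on clean inputs.
    utility = F.mse_loss(student(clean_prompts), teacher(clean_prompts).detach())
    # Backdoor: prompts containing the trigger should encode like the
    # attacker's target prompt instead.
    target_emb = teacher(target_prompt).detach()
    poisoned = student(triggered_prompts)
    backdoor = F.mse_loss(poisoned, target_emb.expand_as(poisoned))
    return utility + beta * backdoor
```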

Exploiting Cultural Biases via Homoglyphs in Text-to-Image Synthesis

2 code implementations • 19 Sep 2022 • Lukas Struppek, Dominik Hintersdorf, Felix Friedrich, Manuel Brack, Patrick Schramowski, Kristian Kersting

Models for text-to-image synthesis, such as DALL-E 2 and Stable Diffusion, have recently drawn a lot of interest from academia and the general public.

Image Generation
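
The trigger itself is trivial to construct: swap Latin characters for visually near-identical Unicode look-alikes. A toy illustration (covering only a handful of code points):

```python
# Latin -> Cyrillic homoglyphs: both render near-identically to humans,
# but the text encoder tokenizes them differently, which can shift the
# generated image toward the culture associated with the foreign script.
HOMOGLYPHS = {"A": "\u0410", "o": "\u043e", "e": "\u0435"}

def inject_homoglyphs(prompt: str) -> str:
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in prompt)

print(inject_homoglyphs("A photo of an actor"))  # looks unchanged on screen
```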

Does CLIP Know My Face?

2 code implementations • 15 Sep 2022 • Dominik Hintersdorf, Lukas Struppek, Manuel Brack, Felix Friedrich, Patrick Schramowski, Kristian Kersting

Our large-scale experiments on CLIP demonstrate that individuals used for training can be identified with very high accuracy.

Inference Attack
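
The underlying query is easy to reproduce with an off-the-shelf CLIP (a sketch assuming the `openai/clip-vit-base-patch32` checkpoint and hypothetical names and files, not the paper's large-scale protocol): present an image together with candidate names and check which name the model ranks highest.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

names = ["Jane Doe", "John Smith", "Alice Example"]  # true name among decoys
prompts = [f"a photo of {name}" for name in names]
image = Image.open("person.jpg")  # image of the individual in question

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores
# If the true name consistently wins across many images of the person,
# that indicates the individual appeared in the training data.
print(names[logits.argmax(dim=-1).item()])
```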

Combining AI and AM - Improving Approximate Matching through Transformer Networks

1 code implementation • 24 Aug 2022 • Frieder Uhlig, Lukas Struppek, Dominik Hintersdorf, Thomas Göbel, Harald Baier, Kristian Kersting

DLAM is then able to detect these patterns within a typically much larger file; that is, DLAM focuses on the use case of fragment detection.

Anomaly Detection
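
Conceptually, fragment detection amounts to sliding a window over a byte stream and scoring each chunk with a learned classifier (the `model` interface and chunking parameters below are assumptions; the repository contains the actual transformer-based approach):

```python
import torch

@torch.no_grad()
def find_fragments(model, data: bytes, chunk_size=512, stride=256, threshold=0.5):
    """Return byte offsets of chunks flagged as containing the known pattern."""
    hits = []
    for offset in range(0, max(len(data) - chunk_size, 0) + 1, stride):
        chunk = data[offset:offset + chunk_size]
        x = torch.tensor(list(chunk), dtype=torch.long).unsqueeze(0)  # byte ids
        if torch.sigmoid(model(x)).item() > threshold:  # binary relevance score
            hits.append(offset)
    return hits
```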

Sparsely-gated Mixture-of-Expert Layers for CNN Interpretability

no code implementations • 22 Apr 2022 • Svetlana Pavlitska, Christian Hubschneider, Lukas Struppek, J. Marius Zöllner

In this work, we apply sparse MoE layers to CNNs for computer vision tasks and analyze the resulting effect on model interpretability.

Image Classification • Language Modelling • +2
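
A toy version of such a layer might look as follows (an illustrative sketch with hard top-1 routing, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class SparseConvMoE(nn.Module):
    """Sparsely-gated mixture of convolution experts: a gate routes each
    input to a single expert, so routing decisions are easy to inspect."""

    def __init__(self, in_ch, out_ch, num_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
            for _ in range(num_experts))
        self.gate = nn.Linear(in_ch, num_experts)

    def forward(self, x):                        # x: (B, C, H, W)
        scores = self.gate(x.mean(dim=(2, 3)))   # gate on pooled features
        expert_idx = scores.argmax(dim=-1)       # hard top-1 routing
        out = torch.stack([
            self.experts[int(i)](x[b:b + 1]).squeeze(0)
            for b, i in enumerate(expert_idx)])
        return out, expert_idx
```

Because each input activates exactly one expert, the gate's decisions themselves become an interpretable signal, e.g. revealing which expert specializes in which sub-population of the data.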

Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks

3 code implementations • 28 Jan 2022 • Lukas Struppek, Dominik Hintersdorf, Antonio De Almeida Correia, Antonia Adler, Kristian Kersting

Model inversion attacks (MIAs) aim to create synthetic images that reflect the class-wise characteristics of a target classifier's private training data by exploiting the model's learned knowledge.
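
The generic MIA recipe behind this can be sketched as optimizing a GAN latent to maximize the target classifier's confidence in a chosen class (hypothetical `generator`/`target` interfaces; Plug & Play's contribution is making this loop robust and flexible across publicly available, pretrained generators):

```python
import torch
import torch.nn.functional as F

def invert_class(generator, target, class_idx, latent_dim=512, steps=500, lr=0.01):
    # Optimize a latent vector so the generated image maximizes the
    # target classifier's confidence in the chosen class.
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = target(generator(z))
        loss = F.cross_entropy(logits, torch.tensor([class_idx]))
        loss.backward()
        opt.step()
    return generator(z).detach()  # synthetic image exposing class features
```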

To Trust or Not To Trust Prediction Scores for Membership Inference Attacks

2 code implementations • 17 Nov 2021 • Dominik Hintersdorf, Lukas Struppek, Kristian Kersting

Membership inference attacks (MIAs) aim to determine whether a specific sample was used to train a predictive model.
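
The prediction-score attacks the paper scrutinizes are as simple as thresholding the model's confidence (a minimal sketch; the paper's point is precisely that this assumption often cannot be trusted):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def score_mia(model, x, threshold=0.9):
    # Predict membership from confidence alone: a high maximum softmax
    # score is taken as evidence that x was part of the training set.
    confidence = F.softmax(model(x), dim=-1).max(dim=-1).values
    return confidence > threshold  # boolean membership guesses per sample
```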
