no code implementations • 29 Feb 2024 • Domenique Zipperling, Simeon Allmendinger, Lukas Struppek, Niklas Kühl
Tailored for efficient and collaborative use of denoising diffusion probabilistic models, CollaFuse enables shared server training and inference, alleviating client computational burdens.
no code implementations • 14 Feb 2024 • Lukas Struppek, Minh Hieu Le, Dominik Hintersdorf, Kristian Kersting
The proliferation of large language models (LLMs) has sparked widespread and general interest due to their strong language generation capabilities, offering great potential for both industry and research.
1 code implementation • 12 Oct 2023 • Dominik Hintersdorf, Lukas Struppek, Daniel Neider, Kristian Kersting
We propose a simple yet effective defense based on backdoor attacks that removes private information, such as the names and faces of individuals, from vision-language models by fine-tuning them for only a few minutes instead of retraining them from scratch.
1 code implementation • 10 Oct 2023 • Lukas Struppek, Dominik Hintersdorf, Kristian Kersting
Label smoothing -- using softened labels instead of hard ones -- is a widely adopted regularization method for deep learning, showing diverse benefits such as enhanced generalization and calibration.
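The softened labels mentioned above are a simple transformation of the one-hot targets. A minimal sketch of standard label smoothing (the smoothing factor `alpha` is illustrative, not a value from the paper):

```python
import numpy as np

def smooth_labels(one_hot, alpha=0.1):
    """Blend hard one-hot labels with a uniform distribution over classes."""
    num_classes = one_hot.shape[-1]
    return (1.0 - alpha) * one_hot + alpha / num_classes

hard = np.array([0.0, 1.0, 0.0, 0.0])
soft = smooth_labels(hard, alpha=0.1)
# true class gets 0.925, every other class 0.025; the row still sums to 1
```

The smoothed targets keep the correct class dominant while assigning small, equal probability mass to all other classes, which is the source of the regularization effect.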
1 code implementation • 10 Oct 2023 • Lukas Struppek, Martin B. Hentschel, Clifton Poth, Dominik Hintersdorf, Kristian Kersting
To address this challenge, we propose a novel approach that enables model training on potentially poisoned datasets by utilizing the power of recent diffusion models.
no code implementations • 18 Aug 2023 • Dominik Hintersdorf, Lukas Struppek, Kristian Kersting
The field of artificial intelligence (AI) has experienced remarkable progress in recent years, driven by the widespread adoption of open-source machine learning models in both research and industry.
1 code implementation • 16 Mar 2023 • Lukas Struppek, Dominik Hintersdorf, Felix Friedrich, Manuel Brack, Patrick Schramowski, Kristian Kersting
Neural network-based image classifiers are powerful tools for computer vision tasks, but they inadvertently reveal sensitive attribute information about their classes, raising privacy concerns.
1 code implementation • 7 Feb 2023 • Felix Friedrich, Manuel Brack, Lukas Struppek, Dominik Hintersdorf, Patrick Schramowski, Sasha Luccioni, Kristian Kersting
Generative AI models have recently achieved astonishing results in quality and are consequently employed in a fast-growing number of applications.
1 code implementation • NeurIPS 2023 • Manuel Brack, Felix Friedrich, Dominik Hintersdorf, Lukas Struppek, Patrick Schramowski, Kristian Kersting
This leaves the user with little semantic control.
3 code implementations • ICCV 2023 • Lukas Struppek, Dominik Hintersdorf, Kristian Kersting
We introduce backdoor attacks against text-guided generative models and demonstrate that their text encoders pose a major tampering risk.
2 code implementations • 19 Sep 2022 • Lukas Struppek, Dominik Hintersdorf, Felix Friedrich, Manuel Brack, Patrick Schramowski, Kristian Kersting
Models for text-to-image synthesis, such as DALL-E 2 and Stable Diffusion, have recently drawn a lot of interest from academia and the general public.
2 code implementations • 15 Sep 2022 • Dominik Hintersdorf, Lukas Struppek, Manuel Brack, Felix Friedrich, Patrick Schramowski, Kristian Kersting
Our large-scale experiments on CLIP demonstrate that individuals used for training can be identified with very high accuracy.
1 code implementation • 24 Aug 2022 • Frieder Uhlig, Lukas Struppek, Dominik Hintersdorf, Thomas Göbel, Harald Baier, Kristian Kersting
DLAM is then able to detect these patterns within a typically much larger file; that is, DLAM targets the use case of fragment detection.
no code implementations • 22 Apr 2022 • Svetlana Pavlitska, Christian Hubschneider, Lukas Struppek, J. Marius Zöllner
In this work, we apply sparse MoE layers to CNNs for computer vision tasks and analyze the resulting effect on model interpretability.
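The core of a sparse MoE layer is its gating function: each input is routed to only the top-k experts, so most experts stay inactive per example. A minimal sketch of top-k gating (not the paper's specific architecture; shapes and k are illustrative assumptions):

```python
import numpy as np

def topk_gating(x, gate_w, k=2):
    """Sparse MoE routing: keep only each token's top-k expert logits,
    softmax over those, and zero out all other experts."""
    logits = x @ gate_w                                 # (tokens, experts)
    topk = np.argsort(logits, axis=1)[:, -k:]           # indices of top-k experts
    mask = np.zeros_like(logits, dtype=bool)
    np.put_along_axis(mask, topk, True, axis=1)
    masked = np.where(mask, logits, -np.inf)            # suppress non-selected experts
    e = np.exp(masked - masked.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)             # sparse gate weights

rng = np.random.default_rng(0)
gates = topk_gating(rng.random((4, 16)), rng.random((16, 8)), k=2)
# each of the 4 tokens routes to exactly 2 of the 8 experts
```

Because only k experts receive nonzero weight per input, the expert assignments themselves can be inspected, which is what makes such layers a candidate lens for interpretability.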
3 code implementations • 28 Jan 2022 • Lukas Struppek, Dominik Hintersdorf, Antonio De Almeida Correia, Antonia Adler, Kristian Kersting
Model inversion attacks (MIAs) aim to create synthetic images that reflect the class-wise characteristics of a target classifier's private training data by exploiting the model's learned knowledge.
2 code implementations • 17 Nov 2021 • Dominik Hintersdorf, Lukas Struppek, Kristian Kersting
Membership inference attacks (MIAs) aim to determine whether a specific sample was used to train a predictive model.
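A common baseline for such attacks exploits the observation that models tend to be more confident on their training samples. A minimal sketch of a confidence-thresholding membership inference attack (the threshold is an illustrative assumption, not a value from the paper):

```python
import numpy as np

def confidence_attack(model_probs, threshold=0.9):
    """Predict 'member' whenever the target model's top softmax
    confidence on a sample exceeds the threshold.

    model_probs: (n_samples, n_classes) softmax outputs of the target model.
    Returns a boolean membership prediction per sample.
    """
    return model_probs.max(axis=1) > threshold

probs = np.array([[0.98, 0.01, 0.01],   # very confident -> flagged as member
                  [0.40, 0.35, 0.25]])  # uncertain -> flagged as non-member
print(confidence_attack(probs))
```

Stronger attacks replace the fixed threshold with shadow models or per-sample calibration, but the thresholding baseline captures the underlying signal: overfitting leaks membership.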
1 code implementation • 12 Nov 2021 • Lukas Struppek, Dominik Hintersdorf, Daniel Neider, Kristian Kersting
Specifically, we show that current deep perceptual hashing may not be robust.
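For context, a perceptual hash maps an image to a short binary fingerprint that is meant to survive small perturbations, with similarity measured by Hamming distance. A minimal sketch of a classical average hash (aHash) — not the learned deep hashing studied in the paper, and the sizes are illustrative:

```python
import numpy as np

def average_hash(gray, hash_size=8):
    """Downscale a grayscale image by block-averaging, then threshold
    each block against the global block mean to get a 64-bit hash."""
    h, w = gray.shape
    bh, bw = h // hash_size, w // hash_size
    blocks = gray[:bh * hash_size, :bw * hash_size]
    blocks = blocks.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (blocks > blocks.mean()).astype(np.uint8).ravel()

def hamming(h1, h2):
    """Number of differing hash bits; small distance means 'same image'."""
    return int(np.sum(h1 != h2))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = img + rng.normal(scale=0.01, size=img.shape)  # small perturbation
print(hamming(average_hash(img), average_hash(noisy)))
```

Robustness means perturbations like the one above should leave the distance near zero; the paper's finding is that for deep perceptual hashes, adversarially chosen perturbations can instead force hash collisions or evasions.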