no code implementations • 22 Feb 2024 • Lucas Fenaux, Florian Kerschbaum
We focus on the image classification domain and provide a theoretical framework to study adversary knowledge inspired by work in order theory.
1 code implementation • 30 Nov 2023 • Benjamin Schneider, Nils Lukas, Florian Kerschbaum
We demonstrate the effectiveness and robustness of our universal backdoor attacks by controlling models with up to 6,000 classes while poisoning only 0.15% of the training dataset.
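As a rough illustration of what a 0.15% poisoning budget looks like in code, here is a minimal dirty-label poisoning sketch; the poison_dataset helper, the 4x4 trigger patch, and the target-class relabeling are hypothetical and are not the paper's universal-backdoor construction.

import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.0015, seed=0):
    """Illustrative dirty-label backdoor poisoning: stamp a small trigger patch
    onto a tiny fraction of the training images and relabel them to the target
    class. (Hypothetical trigger, not the paper's construction.)"""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)           # e.g. 0.15% of the set
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -4:, -4:, :] = 1.0                       # 4x4 white patch, bottom-right corner
    labels[idx] = target_class                           # point the trigger at the target class
    return images, labels

# Usage: x_train has shape (N, H, W, C) with values in [0, 1]; y_train has shape (N,)
# x_poisoned, y_poisoned = poison_dataset(x_train, y_train, target_class=0)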
no code implementations • 29 Sep 2023 • Nils Lukas, Abdulrahman Diaa, Lucas Fenaux, Florian Kerschbaum
A core security property of watermarking is robustness, which states that an attacker can only evade detection by substantially degrading image quality.
no code implementations • 28 Aug 2023 • Clark Barrett, Brad Boyd, Elie Bursztein, Nicholas Carlini, Brad Chen, Jihye Choi, Amrita Roy Chowdhury, Mihai Christodorescu, Anupam Datta, Soheil Feizi, Kathleen Fisher, Tatsunori Hashimoto, Dan Hendrycks, Somesh Jha, Daniel Kang, Florian Kerschbaum, Eric Mitchell, John Mitchell, Zulfikar Ramzan, Khawaja Shams, Dawn Song, Ankur Taly, Diyi Yang
However, GenAI can be used just as well by attackers to generate new attacks and increase the velocity and efficacy of existing attacks.
no code implementations • 21 Aug 2023 • Yutong Wu, Jie Zhang, Florian Kerschbaum, Tianwei Zhang
Users can easily download the word embedding from public websites like Civitai and add it to their own Stable Diffusion model for personalization without fine-tuning.
1 code implementation • 14 Jun 2023 • Abdulrahman Diaa, Lucas Fenaux, Thomas Humphries, Marian Dietz, Faezeh Ebrahimianghazani, Bailey Kacsmar, Xinda Li, Nils Lukas, Rasoul Akhavan Mahdavi, Simon Oya, Ehsan Amjadian, Florian Kerschbaum
Motivated by the success of previous work co-designing machine learning and MPC, we develop an activation function co-design.
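A hedged sketch of the general idea behind MPC-friendly activation functions: a low-degree polynomial needs only additions and multiplications, which are cheap to evaluate under secret sharing. The fitting range and resulting coefficients below are illustrative, not the activation actually co-designed in the paper.

import numpy as np

# Fit a degree-2 polynomial to ReLU on a bounded input range. Evaluating the
# polynomial inside MPC needs only additions and multiplications; this is an
# illustrative stand-in for an MPC-friendly activation.
xs = np.linspace(-3.0, 3.0, 1001)
a2, a1, a0 = np.polyfit(xs, np.maximum(xs, 0.0), deg=2)

def mpc_friendly_act(x):
    return a2 * x * x + a1 * x + a0

print("max |ReLU - poly| on [-3, 3]:",
      np.max(np.abs(np.maximum(xs, 0.0) - mpc_friendly_act(xs))))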
no code implementations • 7 May 2023 • Nils Lukas, Florian Kerschbaum
Our research points to intrinsic flaws in current attack evaluation methods and raises the bar for all data poisoning attackers who must delicately balance this trade-off to remain robust and undetectable.
2 code implementations • 14 Apr 2023 • Nils Lukas, Florian Kerschbaum
We propose an adaptive attack that can successfully remove any watermark with access to only 200 non-watermarked images.
1 code implementation • 19 Nov 2022 • Yihan Wu, Xinda Li, Florian Kerschbaum, Heng Huang, Hongyang Zhang
In this paper, we study the problem of learning a robust dataset such that any classifier naturally trained on the dataset is adversarially robust.
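For context, a minimal FGSM probe of the kind used to check whether a naturally trained classifier is adversarially robust; model, x, y, and the epsilon of 8/255 are assumptions, and this is an evaluation step, not the paper's dataset-learning procedure.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    # One-step FGSM: perturb inputs in the direction of the loss gradient's sign.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

# Robustness check (sketch): accuracy of a naturally trained model on FGSM inputs
# acc = (model(fgsm_attack(model, x_test, y_test)).argmax(1) == y_test).float().mean()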
no code implementations • Findings (NAACL) 2022 • Justus Mattern, Benjamin Weggenmann, Florian Kerschbaum
As the issues of privacy and trust are receiving increasing attention within the research community, various attempts have been made to anonymize textual data.
1 code implementation • 16 Apr 2022 • Daniel Bernau, Jonas Robl, Florian Kerschbaum
We present an approach to quantify and compare the privacy-accuracy trade-off for differentially private Variational Autoencoders.
no code implementations • 29 Sep 2021 • Nils Lukas, Charles Zhang, Florian Kerschbaum
Feature Grinding requires at most six percent of the model's training time on CIFAR-10 and at most two percent on ImageNet for sanitizing the surveyed backdoors.
1 code implementation • 11 Aug 2021 • Nils Lukas, Edward Jiang, Xinda Li, Florian Kerschbaum
Watermarking should be robust against watermark removal attacks that derive a surrogate model that evades provenance verification.
2 code implementations • 4 Mar 2021 • Daniel Bernau, Günther Eibl, Philip W. Grassal, Hannah Keller, Florian Kerschbaum
We transform $(\epsilon,\delta)$ into a bound on the Bayesian posterior belief, held by the adversary assumed by differential privacy, about the presence of any record in the training dataset.
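A minimal sketch of the standard argument for the $\delta = 0$ case under a uniform prior (the paper's transformation additionally accounts for $\delta$): for neighbouring datasets $D$ (record present) and $D'$ (record absent), $\epsilon$-differential privacy bounds the likelihood ratio $\Pr[o \mid D] / \Pr[o \mid D'] \le e^{\epsilon}$ for any observed output $o$, and Bayes' rule with prior $\Pr[D] = \Pr[D'] = 1/2$ then bounds the posterior belief:
\[
  \Pr[D \mid o]
  = \frac{\Pr[o \mid D]}{\Pr[o \mid D] + \Pr[o \mid D']}
  = \frac{1}{1 + \Pr[o \mid D'] / \Pr[o \mid D]}
  \le \frac{e^{\epsilon}}{e^{\epsilon} + 1}.
\]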
1 code implementation • 23 Oct 2020 • Thomas Humphries, Simon Oya, Lindsey Tulloch, Matthew Rafuse, Ian Goldberg, Urs Hengartner, Florian Kerschbaum
Our results reveal that training set dependencies can severely increase the performance of MIAs; assuming that data samples are statistically independent can therefore significantly underestimate that performance.
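As a point of reference, the simplest membership inference baseline is a loss threshold (Yeom et al.-style); the helper name and threshold below are illustrative, and the paper evaluates stronger MIAs under training-set dependencies.

import numpy as np

def threshold_mia(member_losses, nonmember_losses, threshold):
    """Loss-threshold membership inference: predict 'member' when a sample's
    loss falls below the threshold. Illustrative baseline only."""
    tpr = (member_losses < threshold).mean()     # members correctly flagged
    fpr = (nonmember_losses < threshold).mean()  # non-members wrongly flagged
    return {"tpr": float(tpr), "fpr": float(fpr), "advantage": float(tpr - fpr)}

# Usage: losses computed on known members / non-members of the training set
# print(threshold_mia(np.array([0.1, 0.3]), np.array([1.2, 0.8]), threshold=0.5))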
1 code implementation • 24 Dec 2019 • Daniel Bernau, Philip-William Grassal, Jonas Robl, Florian Kerschbaum
We empirically compare local and central differential privacy mechanisms under white- and black-box membership inference to evaluate their relative privacy-accuracy trade-offs.
1 code implementation • ICLR 2021 • Nils Lukas, Yuxuan Zhang, Florian Kerschbaum
We propose a fingerprinting method for deep neural network classifiers that extracts a set of inputs from the source model so that only surrogates agree with the source model on the classification of such inputs.
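A minimal sketch of the verification side of such fingerprinting, assuming the fingerprint inputs have already been extracted; the function name and agreement threshold are hypothetical, and the extraction step (finding inputs that only surrogates reproduce) is the core contribution and is omitted here.

import torch

@torch.no_grad()
def fingerprint_agreement(source_model, suspect_model, fingerprints, threshold=0.9):
    """Flag a suspect model as a surrogate if it matches the source model's
    predictions on the fingerprint inputs at a high rate (illustrative check)."""
    source_model.eval(); suspect_model.eval()
    src = source_model(fingerprints).argmax(dim=1)
    sus = suspect_model(fingerprints).argmax(dim=1)
    agreement = (src == sus).float().mean().item()
    return agreement, agreement >= threshold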
1 code implementation • 31 Oct 2019 • Tianhao Wang, Florian Kerschbaum
White-box watermarking algorithms have the advantage that they do not impact the accuracy of the watermarked model.
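For intuition on why white-box schemes can avoid accuracy loss, here is a hedged sketch of a well-known weight-regularizer watermark (Uchida et al.-style), not the algorithm proposed in this paper; projection, message_bits, and strength are illustrative names.

import torch
import torch.nn.functional as F

def watermark_regularizer(weight, projection, message_bits, strength=0.01):
    """Project a flattened weight tensor through a fixed random matrix and push
    the result towards a secret bit string; added to the task loss, this embeds
    the bits with little impact on accuracy (illustrative sketch)."""
    logits = projection @ weight.flatten()        # one logit per watermark bit
    return strength * F.binary_cross_entropy_with_logits(logits, message_bits)

# Usage sketch (hypothetical shapes): embed a 64-bit message in a conv layer
# w = model.conv1.weight; proj = torch.randn(64, w.numel()); bits = torch.randint(0, 2, (64,)).float()
# loss = task_loss + watermark_regularizer(w, proj, bits)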
no code implementations • 18 Jun 2019 • Masoumeh Shafieinejad, Jiaqi Wang, Nils Lukas, Xinda Li, Florian Kerschbaum
We focus on backdoor-based watermarking and propose two attacks, a black-box and a white-box attack, that remove the watermark.
no code implementations • 2 May 2018 • Benjamin Weggenmann, Florian Kerschbaum
Text mining and information retrieval techniques have been developed to assist us with analyzing, organizing and retrieving documents with the help of computers.
no code implementations • 14 Mar 2017 • Benny Fuhry, Raad Bahmani, Ferdinand Brasser, Florian Hahn, Florian Kerschbaum, Ahmad-Reza Sadeghi
Software-based approaches for search over encrypted data are still challenged either by a lack of proper, low-leakage encryption or by slow performance.