Search Results for author: Florian Kerschbaum

Found 22 papers, 11 papers with code

FastLloyd: Federated, Accurate, Secure, and Tunable $k$-Means Clustering with Differential Privacy

no code implementations · 3 May 2024 · Abdulrahman Diaa, Thomas Humphries, Florian Kerschbaum

By utilizing the computational DP model, we design a lightweight, secure aggregation-based approach that achieves four orders of magnitude speed-up over state-of-the-art related work.

Constrained Clustering · Privacy Preserving
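The aggregation idea behind DP federated clustering can be illustrated with a minimal sketch of one update round: each client reports per-cluster coordinate sums and counts, and the server perturbs the aggregate with Laplace noise before recomputing centroids. This is a hypothetical simplification under assumed unit sensitivity, not FastLloyd's actual secure protocol (which additionally hides the intermediate aggregates cryptographically); the function names are illustrative only.

```python
import numpy as np

def local_stats(points, centroids):
    """One client's per-cluster coordinate sums and counts."""
    k, d = centroids.shape
    assign = np.argmin(((points[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    sums, counts = np.zeros((k, d)), np.zeros(k)
    for c in range(k):
        mask = assign == c
        sums[c] = points[mask].sum(axis=0)
        counts[c] = mask.sum()
    return sums, counts

def dp_update(all_stats, epsilon, sensitivity=1.0, seed=0):
    """Aggregate client stats, add Laplace noise scaled to epsilon,
    and return the new (noisy) centroids."""
    rng = np.random.default_rng(seed)
    sums = sum(s for s, _ in all_stats)
    counts = sum(c for _, c in all_stats)
    noisy_sums = sums + rng.laplace(scale=sensitivity / epsilon, size=sums.shape)
    # Clamp noisy counts away from zero to avoid division blow-ups.
    noisy_counts = np.maximum(counts + rng.laplace(scale=sensitivity / epsilon, size=counts.shape), 1.0)
    return noisy_sums / noisy_counts[:, None]
```

In a real deployment the per-client statistics would be combined under secure aggregation so the server only ever sees the noisy totals.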

SoK: Analyzing Adversarial Examples: A Framework to Study Adversary Knowledge

no code implementations · 22 Feb 2024 · Lucas Fenaux, Florian Kerschbaum

We focus on the image classification domain and provide a theoretical framework to study adversary knowledge inspired by work in order theory.

Image Classification

Universal Backdoor Attacks

1 code implementation · 30 Nov 2023 · Benjamin Schneider, Nils Lukas, Florian Kerschbaum

We demonstrate the effectiveness and robustness of our universal backdoor attacks by controlling models with up to 6,000 classes while poisoning only 0.15% of the training dataset.

Data Poisoning

Leveraging Optimization for Adaptive Attacks on Image Watermarks

no code implementations · 29 Sep 2023 · Nils Lukas, Abdulrahman Diaa, Lucas Fenaux, Florian Kerschbaum

A core security property of watermarking is robustness, which states that an attacker can only evade detection by substantially degrading image quality.

Backdooring Textual Inversion for Concept Censorship

no code implementations · 21 Aug 2023 · Yutong Wu, Jie Zhang, Florian Kerschbaum, Tianwei Zhang

Users can easily download the word embedding from public websites like Civitai and add it to their own stable diffusion model without fine-tuning for personalization.

Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks

no code implementations · 7 May 2023 · Nils Lukas, Florian Kerschbaum

Our research points to intrinsic flaws in current attack evaluation methods and raises the bar for all data poisoning attackers who must delicately balance this trade-off to remain robust and undetectable.

Data Poisoning · Image Classification

PTW: Pivotal Tuning Watermarking for Pre-Trained Image Generators

2 code implementations · 14 Apr 2023 · Nils Lukas, Florian Kerschbaum

We propose an adaptive attack that can successfully remove any watermarking with access to only 200 non-watermarked images.

DeepFake Detection · Face Swapping

Towards Robust Dataset Learning

1 code implementation · 19 Nov 2022 · Yihan Wu, Xinda Li, Florian Kerschbaum, Heng Huang, Hongyang Zhang

In this paper, we study the problem of learning a robust dataset such that any classifier naturally trained on the dataset is adversarially robust.

The Limits of Word Level Differential Privacy

no code implementations · Findings (NAACL) 2022 · Justus Mattern, Benjamin Weggenmann, Florian Kerschbaum

As the issues of privacy and trust are receiving increasing attention within the research community, various attempts have been made to anonymize textual data.

Sentence · Text Anonymization +1

Assessing Differentially Private Variational Autoencoders under Membership Inference

1 code implementation · 16 Apr 2022 · Daniel Bernau, Jonas Robl, Florian Kerschbaum

We present an approach to quantify and compare the privacy-accuracy trade-off for differentially private Variational Autoencoders.

Time Series · Time Series Analysis

Feature Grinding: Efficient Backdoor Sanitation in Deep Neural Networks

no code implementations · 29 Sep 2021 · Nils Lukas, Charles Zhang, Florian Kerschbaum

Feature Grinding requires at most six percent of the model's training time on CIFAR-10 and at most two percent on ImageNet for sanitizing the surveyed backdoors.

Backdoor Attack

SoK: How Robust is Image Classification Deep Neural Network Watermarking? (Extended Version)

1 code implementation · 11 Aug 2021 · Nils Lukas, Edward Jiang, Xinda Li, Florian Kerschbaum

Watermarking should be robust against watermark removal attacks that derive a surrogate model that evades provenance verification.

Image Classification

Quantifying identifiability to choose and audit $ε$ in differentially private deep learning

2 code implementations · 4 Mar 2021 · Daniel Bernau, Günther Eibl, Philip W. Grassal, Hannah Keller, Florian Kerschbaum

We transform $(\epsilon,\delta)$ to a bound on the Bayesian posterior belief of the adversary assumed by differential privacy concerning the presence of any record in the training dataset.

BIG-bench Machine Learning · Inference Attack
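As a simplified illustration of the transformation from $\epsilon$ to a posterior belief (pure $\epsilon$-DP with a uniform prior and $\delta = 0$; the paper's full $(\epsilon,\delta)$ bound is more general), Bayes' rule bounds the adversary's posterior that a given record is present. For a mechanism output $o$ and neighboring datasets $D$ (record present) and $D'$ (record absent):

```latex
\Pr[D \mid o]
  = \frac{\Pr[o \mid D]}{\Pr[o \mid D] + \Pr[o \mid D']}
  \le \frac{e^{\epsilon}\,\Pr[o \mid D']}{e^{\epsilon}\,\Pr[o \mid D'] + \Pr[o \mid D']}
  = \frac{e^{\epsilon}}{1 + e^{\epsilon}}
```

The inequality uses the $\epsilon$-DP guarantee $\Pr[o \mid D] \le e^{\epsilon}\,\Pr[o \mid D']$ and the fact that $x \mapsto x/(x+c)$ is increasing in $x$; e.g. $\epsilon = 1$ caps the posterior belief at roughly $0.73$.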

Investigating Membership Inference Attacks under Data Dependencies

1 code implementation · 23 Oct 2020 · Thomas Humphries, Simon Oya, Lindsey Tulloch, Matthew Rafuse, Ian Goldberg, Urs Hengartner, Florian Kerschbaum

Our results reveal that training set dependencies can severely increase the performance of MIAs, and therefore assuming that data samples are statistically independent can significantly underestimate the performance of MIAs.

BIG-bench Machine Learning · Inference Attack +1

Assessing differentially private deep learning with Membership Inference

1 code implementation · 24 Dec 2019 · Daniel Bernau, Philip-William Grassal, Jonas Robl, Florian Kerschbaum

We empirically compare local and central differential privacy mechanisms under white- and black-box membership inference to evaluate their relative privacy-accuracy trade-offs.

Inference Attack · Membership Inference Attack
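A generic black-box membership inference baseline, useful for understanding what such evaluations measure, is confidence thresholding: guess "member" when the model's confidence in the true label exceeds a threshold, and score the attack by its membership advantage (TPR minus FPR). This is a standard textbook-style sketch, not necessarily the exact attack used in the paper; the toy confidence distributions below are fabricated for illustration.

```python
import numpy as np

def confidence_mia(conf_true_label, threshold=0.9):
    """Predict membership: True where confidence in the true label
    exceeds the (shadow-model-calibrated) threshold."""
    return np.asarray(conf_true_label) >= threshold

def attack_advantage(guesses, is_member):
    """Membership advantage = TPR - FPR over member/non-member samples."""
    guesses, is_member = np.asarray(guesses), np.asarray(is_member)
    tpr = guesses[is_member].mean()   # fraction of members caught
    fpr = guesses[~is_member].mean()  # fraction of non-members flagged
    return tpr - fpr

# Toy setup: overfit models assign members higher confidence on average.
rng = np.random.default_rng(0)
conf = np.concatenate([rng.uniform(0.8, 1.0, 100),    # members
                       rng.uniform(0.3, 0.95, 100)])  # non-members
labels = np.array([True] * 100 + [False] * 100)
adv = attack_advantage(confidence_mia(conf), labels)
```

Under differential privacy, the achievable advantage of any such attack shrinks as $\epsilon$ decreases, which is exactly the trade-off these comparisons quantify.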

Deep Neural Network Fingerprinting by Conferrable Adversarial Examples

1 code implementation · ICLR 2021 · Nils Lukas, Yuxuan Zhang, Florian Kerschbaum

We propose a fingerprinting method for deep neural network classifiers that extracts a set of inputs from the source model so that only surrogates agree with the source model on the classification of such inputs.

Model extraction · Transfer Learning

RIGA: Covert and Robust White-Box Watermarking of Deep Neural Networks

1 code implementation · 31 Oct 2019 · Tianhao Wang, Florian Kerschbaum

White-box watermarking algorithms have the advantage that they do not impact the accuracy of the watermarked model.

Inference Attack

On the Robustness of the Backdoor-based Watermarking in Deep Neural Networks

no code implementations · 18 Jun 2019 · Masoumeh Shafieinejad, Jiaqi Wang, Nils Lukas, Xinda Li, Florian Kerschbaum

We focus on backdoor-based watermarking and propose two attacks -- a black-box and a white-box attack -- that remove the watermark.

SynTF: Synthetic and Differentially Private Term Frequency Vectors for Privacy-Preserving Text Mining

no code implementations · 2 May 2018 · Benjamin Weggenmann, Florian Kerschbaum

Text mining and information retrieval techniques have been developed to assist us with analyzing, organizing and retrieving documents with the help of computers.

Authorship Attribution · Information Retrieval +5

HardIDX: Practical and Secure Index with SGX

no code implementations · 14 Mar 2017 · Benny Fuhry, Raad Bahmani, Ferdinand Brasser, Florian Hahn, Florian Kerschbaum, Ahmad-Reza Sadeghi

Software-based approaches for search over encrypted data are still either challenged by lack of proper, low-leakage encryption or slow performance.

Cryptography and Security
