Search Results for author: Franziska Boenisch

Found 16 papers, 5 papers with code

Regulation Games for Trustworthy Machine Learning

no code implementations · 5 Feb 2024 · Mohammad Yaghini, Patty Liu, Franziska Boenisch, Nicolas Papernot

Existing work on trustworthy machine learning (ML) often concentrates on individual aspects of trust, such as fairness or privacy.

Fairness, Gender Classification

Personalized Differential Privacy for Ridge Regression

1 code implementation · 30 Jan 2024 · Krishna Acharya, Franziska Boenisch, Rakshit Naidu, Juba Ziani

DP requires specifying a uniform privacy level $\varepsilon$ that expresses the maximum privacy loss each data point in the entire dataset is willing to tolerate (a minimal sketch of this uniform budget follows below).

Regression
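For illustration, here is a minimal sketch of the standard Laplace mechanism, in which a single $\varepsilon$ bounds the privacy loss of every data point alike; the function name and toy data are assumptions for this sketch, and the paper's personalized setting would instead assign each point its own budget $\varepsilon_i$.

    import numpy as np

    def laplace_mechanism(value, sensitivity, epsilon, rng=None):
        # Standard Laplace mechanism: a single epsilon bounds the privacy
        # loss of every data point that contributed to `value`.
        rng = rng or np.random.default_rng()
        return value + rng.laplace(0.0, sensitivity / epsilon)

    # Uniform DP: one epsilon for the whole dataset. Personalized DP, as
    # studied in the paper, would let each point i declare its own epsilon_i.
    data = np.array([0.2, 0.9, 0.4])  # toy values clipped to [0, 1]
    noisy_sum = laplace_mechanism(data.sum(), sensitivity=1.0, epsilon=0.5)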

Memorization in Self-Supervised Learning Improves Downstream Generalization

1 code implementation · 19 Jan 2024 · Wenhao Wang, Muhammad Ahmad Kaleem, Adam Dziedzic, Michael Backes, Nicolas Papernot, Franziska Boenisch

Our definition compares the alignment between the representations of data points and their augmented views, as returned by encoders that were trained on those data points versus encoders that were not (a minimal sketch follows below).

Memorization, Self-Supervised Learning
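As a rough illustration of that definition, the sketch below scores memorization of a point as the drop in representation alignment between an encoder trained with the point and one trained without it; the encoder callables and cosine alignment are assumptions for this sketch, not the paper's exact metric.

    import numpy as np

    def cosine(u, v):
        return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

    def memorization_score(x, x_aug, enc_with, enc_without):
        # Alignment of a point with its augmented view under an encoder
        # trained on that point, minus the same alignment under an encoder
        # trained without it; a large positive gap suggests memorization.
        return (cosine(enc_with(x), enc_with(x_aug))
                - cosine(enc_without(x), enc_without(x_aug)))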

Augment then Smooth: Reconciling Differential Privacy with Certified Robustness

no code implementations · 14 Jun 2023 · Jiapeng Wu, Atiyeh Ashari Ghomi, David Glukhov, Jesse C. Cresswell, Franziska Boenisch, Nicolas Papernot

Differential privacy and randomized smoothing are effective defenses that provide certifiable guarantees against each of these threats; however, it is not well understood how implementing either defense impacts the other.
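For context, randomized smoothing certifies robustness by classifying many noisy copies of an input and taking a majority vote (Cohen et al., 2019); the sketch below is a minimal illustration under an assumed 10-class classifier, not the reconciliation procedure proposed in the paper.

    import numpy as np

    def smoothed_predict(classifier, x, sigma=0.25, n_samples=1000, n_classes=10):
        # Classify Gaussian-perturbed copies of x and return the majority
        # class; the vote margin determines the certified L2 radius.
        rng = np.random.default_rng()
        votes = np.zeros(n_classes, dtype=int)
        for _ in range(n_samples):
            votes[classifier(x + rng.normal(0.0, sigma, size=x.shape))] += 1
        return int(votes.argmax())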

Reconstructing Individual Data Points in Federated Learning Hardened with Differential Privacy and Secure Aggregation

no code implementations · 9 Jan 2023 · Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot

FL is promoted as a privacy-enhancing technology (PET) that provides data minimization: data never "leaves" personal devices, and users share only model updates with a server (e.g., a company) coordinating the distributed training (a minimal sketch of this update sharing follows below).

Federated Learning
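To make the data-minimization claim concrete, here is a minimal sketch of federated averaging, in which the server aggregates only the updates clients send; the function name is an assumption for this sketch, and the paper's point is that even these updates can reveal individual data points.

    import numpy as np

    def fedavg(client_updates, client_sizes):
        # The server never sees raw data, only per-client model updates,
        # which it averages weighted by local dataset size.
        weights = np.asarray(client_sizes, dtype=float)
        weights /= weights.sum()
        return sum(w * u for w, u in zip(weights, client_updates))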

Introducing Model Inversion Attacks on Automatic Speaker Recognition

no code implementations · 9 Jan 2023 · Karla Pizzi, Franziska Boenisch, Ugur Sahin, Konstantin Böttinger

To the best of our knowledge, our work is the first to extend MI attacks to audio data, and our results highlight the security risks that arise from extracting biometric data in this setting.

Speaker Recognition

Dataset Inference for Self-Supervised Models

no code implementations · 16 Sep 2022 · Adam Dziedzic, Haonan Duan, Muhammad Ahmad Kaleem, Nikita Dhawan, Jonas Guan, Yannis Cattan, Franziska Boenisch, Nicolas Papernot

We introduce a new dataset inference defense, which uses the private training set of the victim encoder model to attribute ownership in the event of model stealing (a rough sketch of the idea follows below).

Attribute Density Estimation
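A heavily simplified sketch of this idea: if a suspect encoder was stolen, its behavior on the victim's private training points should differ measurably from its behavior on fresh public points. The norm-based signal and two-sample test below are assumptions for illustration only; the paper's actual defense works with density estimates over representations.

    import numpy as np
    from scipy import stats

    def ownership_signal(suspect_encoder, private_data, public_data):
        # Crude proxy: compare representation norms on the victim's private
        # training set against an independent public set; a significant
        # difference is weak evidence the encoder saw the private data.
        priv = [np.linalg.norm(suspect_encoder(x)) for x in private_data]
        pub = [np.linalg.norm(suspect_encoder(x)) for x in public_data]
        return stats.ttest_ind(priv, pub, equal_var=False)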

Individualized PATE: Differentially Private Machine Learning with Individual Privacy Guarantees

no code implementations · 21 Feb 2022 · Franziska Boenisch, Christopher Mühl, Roy Rinberg, Jannis Ihrig, Adam Dziedzic

Applying machine learning (ML) to sensitive domains requires privacy protection of the underlying training data through formal privacy frameworks, such as differential privacy (DP); a sketch of the standard PATE aggregation underlying this line of work follows below.

BIG-bench Machine Learning
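For reference, standard PATE aggregates the votes of an ensemble of teacher models with Laplace noise and releases only the winning label; the sketch below assumes 10 classes and shows the vanilla noisy argmax, not the individualized mechanism from the paper.

    import numpy as np

    def pate_noisy_argmax(teacher_votes, epsilon, n_classes=10):
        # Add Laplace noise to per-class teacher vote counts and release
        # only the argmax label; epsilon controls the noise scale.
        rng = np.random.default_rng()
        counts = np.bincount(teacher_votes, minlength=n_classes).astype(float)
        counts += rng.laplace(0.0, 1.0 / epsilon, size=counts.shape)
        return int(counts.argmax())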

When the Curious Abandon Honesty: Federated Learning Is Not Private

1 code implementation · 6 Dec 2021 · Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot

Instead, these devices share gradients, parameters, or other model updates with a central party (e.g., a company) coordinating the training; the sketch after this entry shows one well-known way such gradients leak raw inputs.

Federated Learning, Privacy Preserving, +1
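A minimal sketch of that leakage, based on the well-known observation that for a fully connected layer y = Wx + b processing a single input x, the weight gradient of neuron i equals the bias gradient of neuron i times x, so x can be read off directly; the function name is an assumption for this sketch.

    import numpy as np

    def reconstruct_input(grad_W, grad_b, neuron):
        # For y = W x + b and a single input x: dL/dW[i] = dL/db[i] * x,
        # so any neuron with a nonzero bias gradient reveals x exactly.
        return grad_W[neuron] / grad_b[neuron]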

A Systematic Review on Model Watermarking for Neural Networks

no code implementations · 25 Sep 2020 · Franziska Boenisch

Machine learning (ML) models are applied in an increasing variety of domains.

Testing Robustness Against Unforeseen Adversaries

3 code implementations · 21 Aug 2019 · Max Kaufmann, Daniel Kang, Yi Sun, Steven Basart, Xuwang Yin, Mantas Mazeika, Akul Arora, Adam Dziedzic, Franziska Boenisch, Tom Brown, Jacob Steinhardt, Dan Hendrycks

To narrow this discrepancy between research and reality, we introduce ImageNet-UA, a framework for evaluating model robustness against a range of unforeseen adversaries, including eighteen new non-$L_p$ attacks.

Adversarial Defense, Adversarial Robustness

Tracking all members of a honey bee colony over their lifetime

1 code implementation · 9 Feb 2018 · Franziska Boenisch, Benjamin Rosemann, Benjamin Wild, Fernando Wario, David Dormagen, Tim Landgraf

Computational approaches to the analysis of collective behavior in social insects increasingly rely on motion paths as an intermediate data layer from which one can infer individual behaviors or social interactions.
