Search Results for author: Boris Köpf

Found 9 papers, 2 papers with code

Closed-Form Bounds for DP-SGD against Record-level Inference

no code implementations22 Feb 2024 Giovanni Cherubin, Boris Köpf, Andrew Paverd, Shruti Tople, Lukas Wutschitz, Santiago Zanella-Béguelin

This paper presents a new approach to evaluate the privacy of machine learning models against specific record-level threats, such as membership and attribute inference, without the indirection through DP.

Attribute

Maatphor: Automated Variant Analysis for Prompt Injection Attacks

no code implementations12 Dec 2023 Ahmed Salem, Andrew Paverd, Boris Köpf

This tool can also assist in generating datasets for jailbreak and prompt injection attacks, thus overcoming the scarcity of data in this domain.

Rethinking Privacy in Machine Learning Pipelines from an Information Flow Control Perspective

no code implementations27 Nov 2023 Lukas Wutschitz, Boris Köpf, Andrew Paverd, Saravan Rajmohan, Ahmed Salem, Shruti Tople, Santiago Zanella-Béguelin, Menglin Xia, Victor Rühle

In this paper, we take an information flow control perspective to describe machine learning systems, which allows us to leverage metadata such as access control policies and define clear-cut privacy and confidentiality guarantees with interpretable information flows.

Retrieval

Bayesian Estimation of Differential Privacy

1 code implementation10 Jun 2022 Santiago Zanella-Béguelin, Lukas Wutschitz, Shruti Tople, Ahmed Salem, Victor Rühle, Andrew Paverd, Mohammad Naseri, Boris Köpf, Daniel Jones

Our Bayesian method exploits the hypothesis testing interpretation of differential privacy to obtain a posterior for $\varepsilon$ (not just a confidence interval) from the joint posterior of the false positive and false negative rates of membership inference attacks.

Grey-box Extraction of Natural Language Models

no code implementations1 Jan 2021 Santiago Zanella-Beguelin, Shruti Tople, Andrew Paverd, Boris Köpf

This is true even for queries that are entirely in-distribution, making extraction attacks indistinguishable from legitimate use; (ii) with fine-tuned base layers, the effectiveness of algebraic attacks decreases with the learning rate, showing that fine-tuning is not only beneficial for accuracy but also indispensable for model confidentiality.

Model extraction

Analyzing Information Leakage of Updates to Natural Language Models

no code implementations17 Dec 2019 Santiago Zanella-Béguelin, Lukas Wutschitz, Shruti Tople, Victor Rühle, Andrew Paverd, Olga Ohrimenko, Boris Köpf, Marc Brockschmidt

To continuously improve quality and reflect changes in data, machine learning applications have to regularly retrain and update their core models.

Language Modelling

Analyzing Privacy Loss in Updates of Natural Language Models

no code implementations25 Sep 2019 Shruti Tople, Marc Brockschmidt, Boris Köpf, Olga Ohrimenko, Santiago Zanella-Béguelin

To continuously improve quality and reflect changes in data, machine learning-based services have to regularly re-train and update their core models.

Theory and Practice of Finding Eviction Sets

1 code implementation2 Oct 2018 Pepe Vila, Boris Köpf, José Francisco Morales

Many micro-architectural attacks rely on the capability of an attacker to efficiently find small eviction sets: groups of virtual addresses that map to the same cache set.

Cryptography and Security

Cannot find the paper you are looking for? You can Submit a new open access paper.