no code implementations • 13 Dec 2024 • Jinnan Guo, Kapil Vaswani, Andrew Paverd, Peter Pietzuch
While current solutions have explored the use of trusted execution environment (TEEs) to combat such attacks, there is a mismatch with the security needs of FL: TEEs offer confidentiality guarantees, which are unnecessary for FL and make them vulnerable to side-channel attacks, and focus on coarse-grained attestation, which does not capture the execution of FL training.
no code implementations • 4 Oct 2024 • Shoaib Ahmed Siddiqui, Radhika Gaonkar, Boris Köpf, David Krueger, Andrew Paverd, Ahmed Salem, Shruti Tople, Lukas Wutschitz, Menglin Xia, Santiago Zanella-Béguelin
In this paper, we propose a novel, more permissive approach to propagate information flow labels through LLM queries.
1 code implementation • 2 Jun 2024 • Sahar Abdelnabi, Aideen Fay, Giovanni Cherubin, Ahmed Salem, Mario Fritz, Andrew Paverd
We study LLM activations as a solution to detect task drift, showing that activation deltas - the difference in activations before and after processing external data - are strongly correlated with this phenomenon.
no code implementations • 22 Feb 2024 • Giovanni Cherubin, Boris Köpf, Andrew Paverd, Shruti Tople, Lukas Wutschitz, Santiago Zanella-Béguelin
This paper presents a new approach to evaluate the privacy of machine learning models against specific record-level threats, such as membership and attribute inference, without the indirection through DP.
no code implementations • 12 Dec 2023 • Ahmed Salem, Andrew Paverd, Boris Köpf
This tool can also assist in generating datasets for jailbreak and prompt injection attacks, thus overcoming the scarcity of data in this domain.
no code implementations • 27 Nov 2023 • Lukas Wutschitz, Boris Köpf, Andrew Paverd, Saravan Rajmohan, Ahmed Salem, Shruti Tople, Santiago Zanella-Béguelin, Menglin Xia, Victor Rühle
In this paper, we take an information flow control perspective to describe machine learning systems, which allows us to leverage metadata such as access control policies and define clear-cut privacy and confidentiality guarantees with interpretable information flows.
1 code implementation • 2 Feb 2023 • Marlon Tobaben, Aliaksandra Shysheya, John Bronskill, Andrew Paverd, Shruti Tople, Santiago Zanella-Beguelin, Richard E Turner, Antti Honkela
There has been significant recent progress in training differentially private (DP) models which achieve accuracy that approaches the best non-private models.
1 code implementation • 21 Dec 2022 • Ahmed Salem, Giovanni Cherubin, David Evans, Boris Köpf, Andrew Paverd, Anshuman Suri, Shruti Tople, Santiago Zanella-Béguelin
Deploying machine learning models in production may allow adversaries to infer sensitive information about training data.
1 code implementation • 10 Jun 2022 • Santiago Zanella-Béguelin, Lukas Wutschitz, Shruti Tople, Ahmed Salem, Victor Rühle, Andrew Paverd, Mohammad Naseri, Boris Köpf, Daniel Jones
Our Bayesian method exploits the hypothesis testing interpretation of differential privacy to obtain a posterior for $\varepsilon$ (not just a confidence interval) from the joint posterior of the false positive and false negative rates of membership inference attacks.
no code implementations • 1 Jan 2021 • Santiago Zanella-Beguelin, Shruti Tople, Andrew Paverd, Boris Köpf
This is true even for queries that are entirely in-distribution, making extraction attacks indistinguishable from legitimate use; (ii) with fine-tuned base layers, the effectiveness of algebraic attacks decreases with the learning rate, showing that fine-tuning is not only beneficial for accuracy but also indispensable for model confidentiality.
no code implementations • 17 Dec 2019 • Santiago Zanella-Béguelin, Lukas Wutschitz, Shruti Tople, Victor Rühle, Andrew Paverd, Olga Ohrimenko, Boris Köpf, Marc Brockschmidt
To continuously improve quality and reflect changes in data, machine learning applications have to regularly retrain and update their core models.
1 code implementation • 25 Sep 2019 • Yoshimichi Nakatsuka, Andrew Paverd, Gene Tsudik
Security and privacy of the Internet Domain Name System (DNS) have been longstanding concerns.
Cryptography and Security
1 code implementation • 14 Oct 2018 • Fritz Alder, N. Asokan, Arseny Kurnikov, Andrew Paverd, Michael Steiner
A core contribution of S-FaaS is our set of resource measurement mechanisms that securely measure compute time inside an enclave, and actual memory allocations.
Cryptography and Security
1 code implementation • 20 Aug 2018 • Shohreh Hosseinzadeh, Hans Liljestrand, Ville Leppänen, Andrew Paverd
Intel Software Guard Extensions (SGX) is a promising hardware-based technology for protecting sensitive computations from potentially compromised system software.
Cryptography and Security
1 code implementation • 23 Apr 2018 • Arseny Kurnikov, Andrew Paverd, Mohammad Mannan, N. Asokan
Personal cryptographic keys are the foundation of many secure services, but storing these keys securely is a challenge, especially if they are used from multiple devices.
Cryptography and Security
no code implementations • 17 Oct 2017 • Elena Reshetova, Hans Liljestrand, Andrew Paverd, N. Asokan
The security of billions of devices worldwide depends on the security and robustness of the mainline Linux kernel.
Cryptography and Security Operating Systems
no code implementations • 24 Apr 2017 • Jorden Whitefield, Liqun Chen, Frank Kargl, Andrew Paverd, Steve Schneider, Helen Treharne, Stephan Wesemeyer
This paper focusses on the formal analysis of a particular element of security mechanisms for V2X found in many proposals: the revocation of malicious or misbehaving vehicles from the V2X system by invalidating their credentials.
Cryptography and Security D.2.4; D.4.6
1 code implementation • 25 May 2016 • Tigist Abera, N. Asokan, Lucas Davi, Jan-Erik Ekberg, Thomas Nyman, Andrew Paverd, Ahmad-Reza Sadeghi, Gene Tsudik
Remote attestation is a crucial security service particularly relevant to increasingly popular IoT (and other embedded) devices.
Cryptography and Security