1 code implementation • 7 Dec 2023 • Vasisht Duddu, Sebastian Szyller, N. Asokan
We survey existing literature on unintended interactions, accommodating them within our framework.
1 code implementation • 18 Aug 2023 • Vasisht Duddu, Anudeep Das, Nora Khayata, Hossein Yalame, Thomas Schneider, N. Asokan
The success of machine learning (ML) has been accompanied by increased concerns about its trustworthiness.
1 code implementation • 27 Jul 2023 • Buse G. A. Tekgul, N. Asokan
We first show that it is possible to find non-transferable, universal adversarial masks, i.e., perturbations, to generate adversarial examples that can successfully transfer from a victim policy to its modified versions but not to independently trained policies.
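As a rough illustration of the verification step, one can measure how often a suspect policy picks the same actions as the victim on masked states; high agreement suggests a modified copy, low agreement an independent policy. Everything below is a toy stand-in: the policies are untrained and the mask is random, whereas the paper optimizes the mask on the victim.

```python
# Hypothetical sketch: checking a suspect policy against a victim's
# universal adversarial mask. Shapes, thresholds and models are invented.
import torch

def action_agreement(victim, suspect, states, mask):
    """Fraction of masked states on which the two policies agree."""
    adv_states = torch.clamp(states + mask, 0.0, 1.0)
    v_actions = victim(adv_states).argmax(dim=1)
    s_actions = suspect(adv_states).argmax(dim=1)
    return (v_actions == s_actions).float().mean().item()

# Toy stand-ins for policy networks mapping states to action logits.
victim = torch.nn.Sequential(torch.nn.Linear(8, 4))
suspect = torch.nn.Sequential(torch.nn.Linear(8, 4))

states = torch.rand(256, 8)        # probe states
mask = 0.05 * torch.randn(1, 8)    # universal mask (random here; in the
                                   # paper it is optimized on the victim)
score = action_agreement(victim, suspect, states, mask)
print(f"agreement: {score:.2f} (flag as stolen above a chosen threshold)")
```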
no code implementations • 17 Apr 2023 • Asim Waheed, Vasisht Duddu, N. Asokan
In non-graph settings, fingerprinting models, or the data used to build them, has been shown to be a promising approach toward ownership verification.
1 code implementation • 13 Apr 2023 • Jian Liu, Rui Zhang, Sebastian Szyller, Kui Ren, N. Asokan
Our core idea is that a malicious accuser can deviate (without detection) from the specified model ownership resolution (MOR) process by finding (transferable) adversarial examples that successfully serve as evidence against independent suspect models.
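A minimal sketch of that idea, using FGSM on toy models (all names, sizes and data below are invented for illustration): the accuser crafts adversarial examples on their own model and checks whether they also fool an independent suspect model.

```python
# Illustrative sketch (not the paper's exact procedure): a malicious
# accuser crafts adversarial examples on their own model and tests
# whether they transfer to an independent suspect model.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step FGSM perturbation of x toward higher loss."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

accuser = torch.nn.Sequential(torch.nn.Linear(16, 3))   # toy models
suspect = torch.nn.Sequential(torch.nn.Linear(16, 3))

x = torch.rand(128, 16)
y = torch.randint(0, 3, (128,))
x_adv = fgsm(accuser, x, y, eps=0.1)

# If the suspect also misclassifies the transferred examples, the accuser
# can present them as (false) "watermark" evidence of theft.
transfer_rate = (suspect(x_adv).argmax(1) != y).float().mean()
print(f"transfer-induced error rate on suspect: {transfer_rate:.2f}")
```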
no code implementations • 24 Oct 2022 • Sebastian Szyller, Rui Zhang, Jian Liu, N. Asokan
However, in a subspace of the same setting, we prove that DI suffers from high false positives (FPs): it can incorrectly flag as stolen an independent model trained on non-overlapping data from the same distribution.
1 code implementation • 5 Jul 2022 • Sebastian Szyller, N. Asokan
We then focus on systematically analyzing pairwise interactions between protection mechanisms for one concern, model and data ownership verification, with two other classes of ML protection mechanisms: differentially private training, and robustness against model evasion.
1 code implementation • 25 Feb 2022 • Buse Gul Atli Tekgul, N. Asokan
We show that radioactive data can effectively survive model extraction attacks, which raises the possibility that it can be used for ML model ownership verification robust against model extraction.
no code implementations • 19 Feb 2022 • Tommi Gröndahl, Yujia Guo, N. Asokan
To facilitate this, we experiment with four sequence modelling tasks using the T5 Transformer in two settings: zero-shot generalization, and generalization across class-specific vocabularies flipped between the training and test sets.
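A minimal sketch of the flipped-vocabulary setting; the task, templates and word lists below are invented stand-ins for the paper's actual tasks and vocabularies.

```python
# Toy sketch of flipping class-specific vocabularies between the
# training and test sets. A model that generalizes from the task rather
# than memorizing the lexicon must rely on context, not the words.
POSITIVE, NEGATIVE = ["great", "superb"], ["awful", "dreadful"]

def build(words, label, template="this film was {}"):
    return [(template.format(w), label) for w in words]

# Training: POSITIVE words carry label 1, NEGATIVE words label 0.
train = build(POSITIVE, 1) + build(NEGATIVE, 0)
# Test: the class-specific vocabularies are flipped.
test = build(NEGATIVE, 1) + build(POSITIVE, 0)
print(train, test, sep="\n")
```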
no code implementations • 4 Dec 2021 • Vasisht Duddu, Sebastian Szyller, N. Asokan
Using ten benchmark datasets, we show that SHAPr is indeed effective in estimating the susceptibility of training data records to membership inference attacks (MIAs).
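One efficient way to compute per-record importance scores of this kind is the closed-form KNN-Shapley recursion of Jia et al., which is the flavour of metric SHAPr builds on; the paper's exact formulation differs, and the data below is synthetic.

```python
# Rough sketch of a KNN-Shapley-style per-record score: higher-scoring
# records matter more to the model and (per SHAPr) tend to be more
# susceptible to membership inference. Data here is random.
import numpy as np

def knn_shapley(x_train, y_train, x_test, y_test, k=5):
    n = len(x_train)
    total = np.zeros(n)
    for xt, yt in zip(x_test, y_test):
        # Rank training points by distance to the test point.
        order = np.argsort(np.linalg.norm(x_train - xt, axis=1))
        s = np.zeros(n)
        s[order[-1]] = float(y_train[order[-1]] == yt) / n
        # Closed-form recursion from farthest to nearest neighbour.
        for i in range(n - 2, -1, -1):
            a, b = order[i], order[i + 1]
            s[a] = s[b] + (float(y_train[a] == yt) - float(y_train[b] == yt)) \
                   / k * min(k, i + 1) / (i + 1)
        total += s
    return total / len(x_test)

rng = np.random.default_rng(0)
x_tr, y_tr = rng.normal(size=(100, 4)), rng.integers(0, 2, 100)
x_te, y_te = rng.normal(size=(20, 4)), rng.integers(0, 2, 20)
print(knn_shapley(x_tr, y_tr, x_te, y_te)[:5])
```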
1 code implementation • 16 Jun 2021 • Buse G. A. Tekgul, Shelly Wang, Samuel Marchal, N. Asokan
Via an extensive evaluation using three Atari 2600 games, we show that our attacks are effective, as they fully degrade the performance of three different DRL agents (up to 100%, even when the $l_\infty$ bound on the perturbation is as small as 0.01).
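Applying such a perturbation at inference time is simple once the mask exists; the sketch below shows the budget enforcement only, with a toy policy and a random stand-in mask rather than the optimized one from the paper.

```python
# Illustrative only: applying a precomputed universal perturbation,
# clipped to an l-infinity budget, to every observation an agent sees.
import torch

EPS = 0.01                                   # l_inf bound from the paper

def perturb(obs, delta, eps=EPS):
    """Add a universal mask to an observation, respecting the bound."""
    delta = delta.clamp(-eps, eps)           # enforce ||delta||_inf <= eps
    return (obs + delta).clamp(0.0, 1.0)     # keep pixels in valid range

policy = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(84 * 84, 6))
delta = (2 * torch.rand(1, 84, 84) - 1) * EPS   # stand-in universal mask

obs = torch.rand(1, 84, 84)                  # fake Atari-style frame
action = policy(perturb(obs, delta)).argmax().item()
print("action under attack:", action)
```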
no code implementations • 26 Apr 2021 • Sebastian Szyller, Vasisht Duddu, Tommi Gröndahl, N. Asokan
We present a framework for conducting such attacks, and show that an adversary can successfully extract functional surrogate models by querying the victim model $F_V$ with data from the same domain as its training data.
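The extraction loop itself is short; a minimal sketch under stated assumptions follows, where `victim_api` is a hypothetical black-box endpoint and a linear surrogate stands in for the DNNs the paper targets.

```python
# Minimal model-extraction sketch: query the victim's prediction API
# with same-domain data, then fit a surrogate on the returned labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def victim_api(x):                       # hypothetical black-box API
    return (x.sum(axis=1) > 0).astype(int)

queries = rng.normal(size=(1000, 10))    # same-domain query data
labels = victim_api(queries)             # only the API's outputs are used

surrogate = LogisticRegression().fit(queries, labels)
test = rng.normal(size=(200, 10))
agreement = (surrogate.predict(test) == victim_api(test)).mean()
print(f"surrogate/victim agreement: {agreement:.2%}")
```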
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Mika Juuti, Tommi Gröndahl, Adrian Flanagan, N. Asokan
Detection of some types of toxic language is hampered by extreme scarcity of labeled training data.
1 code implementation • 17 Aug 2020 • Buse Gul Atli, Yuxi Xia, Samuel Marchal, N. Asokan
In this paper, we present WAFFLE, the first approach to watermark DNN models trained using federated learning.
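A rough sketch of the server-side idea: after each aggregation round, the server briefly retrains the global model on a trigger set that only it knows, re-embedding a backdoor-style watermark. The trigger data below is random and the models are toys; WAFFLE's actual trigger patterns and training schedule differ.

```python
# Sketch of server-side watermark embedding in federated learning:
# FedAvg aggregation followed by a few steps on a secret trigger set.
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Linear(20, 5))
trigger_x = torch.rand(32, 20)                 # server-held trigger set
trigger_y = torch.randint(0, 5, (32,))         # secret target labels

def fedavg(client_states):
    """Average client model parameters key by key."""
    return {k: torch.stack([s[k] for s in client_states]).mean(0)
            for k in client_states[0]}

def embed_watermark(model, steps=10, lr=0.05):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(trigger_x), trigger_y).backward()
        opt.step()

# One federated round: aggregate client updates, then re-embed watermark.
clients = [{k: v + 0.01 * torch.randn_like(v)
            for k, v in model.state_dict().items()} for _ in range(3)]
model.load_state_dict(fedavg(clients))
embed_watermark(model)
acc = (model(trigger_x).argmax(1) == trigger_y).float().mean()
print(f"watermark accuracy: {acc:.2f}")
```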
no code implementations • 11 Oct 2019 • Buse Gul Atli, Sebastian Szyller, Mika Juuti, Samuel Marchal, N. Asokan
However, model extraction attacks can steal the functionality of ML models using the information leaked to clients through the results returned via the API.
no code implementations • 8 Jun 2019 • Mika Juuti, Buse Gul Atli, N. Asokan
We investigate how an adversary can optimally use its query budget for targeted evasion attacks against deep neural networks in a black-box setting.
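A toy sketch of spending a fixed query budget on a targeted attack, using naive random search against a hypothetical scoring API; real attacks use far smarter proposal distributions, but the budget accounting is the same.

```python
# Budgeted black-box targeted evasion: each candidate costs one query,
# and we keep any perturbation that raises the target-class score.
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(10, 3))                 # secret "victim" weights

def query(x):                                # black-box: scores only
    z = np.exp(x @ W)
    return z / z.sum()

def targeted_attack(x, target, budget=200, step=0.1):
    best = x.copy()
    best_score = query(best)[target]
    for _ in range(budget):                  # each iteration = one query
        cand = best + step * rng.normal(size=x.shape)
        score = query(cand)[target]
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

x0 = rng.normal(size=10)
adv, score = targeted_attack(x0, target=2)
print(f"target-class score after budget spent: {score:.2f}")
```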
1 code implementation • 3 Jun 2019 • Sebastian Szyller, Buse Gul Atli, Samuel Marchal, N. Asokan
Existing watermarking schemes are ineffective against IP theft via model extraction since it is the adversary who trains the surrogate model.
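The paper's proposed countermeasure, DAWN, instead embeds the watermark from the API side: a keyed hash deterministically selects a small fraction of queries whose responses are altered, so any surrogate trained on those responses inherits the watermark. The sketch below uses illustrative parameters and a made-up selection rate.

```python
# Sketch of API-side dynamic watermarking: a keyed hash of each query
# decides whether its response is deterministically altered.
import hashlib
import hmac
import numpy as np

KEY = b"api-owner-secret"          # known only to the model owner
WATERMARK_RATE = 1 / 250           # fraction of queries to alter

def is_watermarked(x, rate=WATERMARK_RATE):
    digest = hmac.new(KEY, x.tobytes(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") < rate * 2**64

def answer(x, true_label, n_classes=10):
    if is_watermarked(x):
        # Deterministic wrong label derived from the same keyed hash.
        d = hmac.new(KEY, x.tobytes() + b"label", hashlib.sha256).digest()
        return (true_label + 1 + d[0] % (n_classes - 1)) % n_classes
    return true_label

x = np.random.default_rng(3).random(784).astype(np.float32)
print("returned label:", answer(x, true_label=7))
```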
no code implementations • 31 May 2019 • Tommi Gröndahl, N. Asokan
Finally, we highlight a critical problem that afflicts all current style transfer techniques: the adversary can use the same technique for thwarting style transfer via adversarial training.
no code implementations • 24 May 2019 • Hans Liljestrand, Thomas Nyman, Lachlan J. Gunn, Jan-Erik Ekberg, N. Asokan
Software shadow stacks incur high overheads or trade off security for efficiency.
Cryptography and Security
no code implementations • 24 Feb 2019 • Tommi Gröndahl, N. Asokan
Textual deception constitutes a major problem for online security.
1 code implementation • 14 Oct 2018 • Fritz Alder, N. Asokan, Arseny Kurnikov, Andrew Paverd, Michael Steiner
A core contribution of S-FaaS is our set of resource measurement mechanisms that securely measure both compute time inside an enclave and actual memory allocations.
Cryptography and Security
no code implementations • 28 Aug 2018 • Tommi Gröndahl, Luca Pajola, Mika Juuti, Mauro Conti, N. Asokan
With the spread of social networks and their unfortunate use for spreading hate speech, automatic detection of such speech has become a pressing problem.
1 code implementation • 7 May 2018 • Mika Juuti, Bo Sun, Tatsuya Mori, N. Asokan
Automatically generated fake restaurant reviews are a threat to online review systems.
2 code implementations • 7 May 2018 • Mika Juuti, Sebastian Szyller, Samuel Marchal, N. Asokan
Access to the model can be restricted so that it is available only via well-defined prediction APIs.
Cryptography and Security
1 code implementation • 23 Apr 2018 • Arseny Kurnikov, Andrew Paverd, Mohammad Mannan, N. Asokan
Personal cryptographic keys are the foundation of many secure services, but storing these keys securely is a challenge, especially if they are used from multiple devices.
Cryptography and Security
no code implementations • 20 Apr 2018 • Thien Duc Nguyen, Samuel Marchal, Markus Miettinen, N. Asokan, Ahmad-Reza Sadeghi
Consequently, DIoT can cope with the emergence of new device types as well as new attacks.
Cryptography and Security
no code implementations • 17 Oct 2017 • Elena Reshetova, Hans Liljestrand, Andrew Paverd, N. Asokan
The security of billions of devices worldwide depends on the security and robustness of the mainline Linux kernel.
Cryptography and Security • Operating Systems
2 code implementations • 15 Nov 2016 • Markus Miettinen, Samuel Marchal, Ibbad Hafeez, N. Asokan, Ahmad-Reza Sadeghi, Sasu Tarkoma
In this paper, we present IOT SENTINEL, a system that automatically identifies the types of devices being connected to an IoT network and enforces rules constraining the communications of vulnerable devices, so as to minimize the damage resulting from their compromise.
Cryptography and Security
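An illustrative sketch of the identification-then-isolation pipeline: fingerprint features extracted from a device's setup traffic feed a random-forest classifier (the classifier family IoT Sentinel uses), and the predicted device type maps to an isolation policy. Features, device classes and policies below are invented.

```python
# Toy device-type identification and policy enforcement. The 12 features
# stand in for real fingerprints (packet-size stats, protocols, ports).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)

X = rng.random((300, 12))            # pretend setup-traffic fingerprints
y = rng.integers(0, 3, 300)          # 3 hypothetical device types

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

POLICY = {0: "unrestricted", 1: "LAN-only", 2: "quarantine"}  # per type
new_device = rng.random((1, 12))
device_type = int(clf.predict(new_device)[0])
print("enforcing policy:", POLICY[device_type])
```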
1 code implementation • 25 May 2016 • Tigist Abera, N. Asokan, Lucas Davi, Jan-Erik Ekberg, Thomas Nyman, Andrew Paverd, Ahmad-Reza Sadeghi, Gene Tsudik
Remote attestation is a crucial security service particularly relevant to increasingly popular IoT (and other embedded) devices.
Cryptography and Security
no code implementations • 3 Nov 2015 • Babins Shrestha, Nitesh Saxena, Hien Thi Thu Truong, N. Asokan
Contextual proximity detection (or, co-presence detection) is a promising approach to defend against relay attacks in many mobile authentication systems.
Cryptography and Security
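A toy sketch of the general idea: two devices compare features derived from several ambient sensor modalities, and a classifier decides whether they are co-present. The modalities, distances and training data below are simulated placeholders, not the paper's dataset.

```python
# Co-presence detection from multi-modal context distances: co-present
# device pairs should show small per-modality differences.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Simulated per-modality distances, e.g. audio / Wi-Fi / Bluetooth.
co = np.abs(rng.normal(0.0, 0.2, size=(200, 3)))   # co-present: small
far = np.abs(rng.normal(1.0, 0.3, size=(200, 3)))  # remote: large
X = np.vstack([co, far])
y = np.concatenate([np.ones(200), np.zeros(200)])

clf = LogisticRegression().fit(X, y)
pair = np.abs(rng.random(3) - rng.random(3))       # new device pair
print("co-present?", bool(clf.predict([pair])[0]))
```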
1 code implementation • 26 Oct 2015 • Altaf Shaik, Ravishankar Borgaonkar, N. Asokan, Valtteri Niemi, Jean-Pierre Seifert
We carefully analyzed LTE access network protocol specifications and uncovered several vulnerabilities.
Cryptography and Security