1 code implementation • 7 Dec 2023 • Vasisht Duddu, Sebastian Szyller, N. Asokan
We survey existing literature on unintended interactions, accommodating them within our framework.
1 code implementation • 18 Aug 2023 • Vasisht Duddu, Anudeep Das, Nora Khayata, Hossein Yalame, Thomas Schneider, N. Asokan
The success of machine learning (ML) has been accompanied by increased concerns about its trustworthiness.
no code implementations • 17 Apr 2023 • Asim Waheed, Vasisht Duddu, N. Asokan
In non-graph settings, fingerprinting models, or the data used to build them, has been shown to be a promising approach to ownership verification.
no code implementations • 18 Nov 2022 • Jan Aalmoes, Vasisht Duddu, Antoine Boutet
We are the first to demonstrate the alignment of group fairness with the specific privacy notion of attribute privacy in a black-box setting.
1 code implementation • 21 Aug 2022 • Vasisht Duddu, Antoine Boutet
We focus on the specific privacy risk of attribute inference attacks, wherein an adversary infers sensitive attributes of an input (e.g., race and sex) given its model explanations.
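The attack described above can be sketched as follows: the adversary sees only per-feature attribution scores (explanations) for each input and learns a rule that maps them to the sensitive attribute. Everything here is illustrative, a minimal sketch assuming a single leaking attribution dimension, not the paper's actual attack or data.

```python
# Hypothetical sketch of attribute inference from model explanations:
# one attribution score correlates with the sensitive attribute, so a
# simple threshold rule recovers it. Synthetic data, illustrative names.
import random

random.seed(0)

def make_explanation(sensitive_bit):
    """Synthetic attribution vector: feature 2's score leaks the bit."""
    base = [random.gauss(0.0, 0.1) for _ in range(4)]
    base[2] += 0.8 if sensitive_bit else -0.8
    return base

records = [(make_explanation(b), b)
           for b in [random.randint(0, 1) for _ in range(200)]]

def infer_attribute(explanation):
    # Attack "model": threshold on the leaking attribution score.
    return 1 if explanation[2] > 0.0 else 0

accuracy = sum(infer_attribute(e) == b for e, b in records) / len(records)
print(f"attack accuracy: {accuracy:.2f}")  # well above the 0.5 random baseline
```

In practice the adversary would train a classifier on (explanation, attribute) pairs rather than hand-pick a threshold, but the leakage channel is the same.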
no code implementations • 4 Feb 2022 • Jan Aalmoes, Vasisht Duddu, Antoine Boutet
This unpredictable effect of fairness mechanisms on the attribute privacy risk is an important limitation on their use that must be accounted for by the model builder.
no code implementations • 4 Dec 2021 • Vasisht Duddu, Sebastian Szyller, N. Asokan
Using ten benchmark datasets, we show that SHAPr is indeed effective in estimating susceptibility of training data records to MIAs.
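SHAPr itself is a Shapley-value-based membership privacy score; as a simpler stand-in, the sketch below ranks training records by a loss-based membership signal (a record the model fits with very low loss is assumed more memorized, hence more susceptible to membership inference attacks). The data, record names, and scoring function are all illustrative assumptions, not SHAPr.

```python
# Swapped-in, simplified susceptibility metric (NOT SHAPr): rank training
# records by how confidently the model fits them, as a proxy for MIA risk.
import math

# Per-record losses of a trained model (synthetic illustrative values).
train_losses = {"rec_a": 0.02, "rec_b": 0.45, "rec_c": 1.30, "rec_d": 0.10}

def susceptibility(loss, temperature=1.0):
    """Map a loss to a (0, 1) proxy score: low loss -> high susceptibility."""
    return math.exp(-loss / temperature)

scores = {rec: susceptibility(l) for rec, l in train_losses.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # records most at risk of membership inference first
```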
no code implementations • 26 Apr 2021 • Sebastian Szyller, Vasisht Duddu, Tommi Gröndahl, N. Asokan
We present a framework for conducting such attacks, and show that an adversary can successfully extract functional surrogate models by querying $F_V$ using data from the same domain as the training data for $F_V$.
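The extraction setting above can be illustrated with a toy black box: the adversary queries the victim $F_V$ on in-domain inputs and fits a surrogate to the returned labels. The one-dimensional threshold victim and the fitting rule are stand-in assumptions, not the models or attack from the paper.

```python
# Minimal model-extraction sketch: query a black-box victim, fit a
# surrogate from the query/label pairs, measure agreement. Toy models only.
import random

random.seed(1)

def victim(x):                      # black-box F_V: unknown threshold rule
    return 1 if 2.0 * x - 1.0 > 0 else 0

queries = [random.uniform(0, 1) for _ in range(500)]
labels = [victim(x) for x in queries]       # adversary's query access

# Fit surrogate threshold: midpoint between highest 0-label and lowest 1-label.
zeros = [x for x, y in zip(queries, labels) if y == 0]
ones = [x for x, y in zip(queries, labels) if y == 1]
threshold = (max(zeros) + min(ones)) / 2

def surrogate(x):
    return 1 if x > threshold else 0

test_points = [i / 1000 for i in range(1000)]
agreement = sum(surrogate(x) == victim(x) for x in test_points) / len(test_points)
print(f"surrogate agreement with victim: {agreement:.3f}")
```

With enough in-domain queries the surrogate's decision boundary converges to the victim's, which is the essence of a functional extraction attack.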
no code implementations • 2 Oct 2020 • Vasisht Duddu, Antoine Boutet, Virat Shejwalkar
We choose quantization as a design choice for building highly efficient and private models.
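The design choice mentioned above can be sketched as simple post-training weight quantization: map float weights to 8-bit integers with a scale factor, shrinking the model and coarsening the information each weight carries. This is a generic illustration under standard symmetric-quantization assumptions; the paper's exact scheme may differ.

```python
# Sketch of symmetric post-training quantization to signed 8-bit integers.
def quantize(weights, bits=8):
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    return [q * scale for q in q_weights]

weights = [0.31, -1.27, 0.05, 0.9]        # illustrative float weights
q, scale = quantize(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, f"max round-trip error: {max_err:.4f}")
```

The round-trip error is bounded by the scale step, which is the efficiency/fidelity trade-off quantization buys.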
no code implementations • 31 Oct 2019 • Vasisht Duddu, D. Vijay Rao
While the attacks proposed in the literature are empirical, there is a need for a theoretical framework to measure the information leaked under such extraction attacks.
no code implementations • 30 Oct 2019 • Vasisht Duddu, N. Rajesh Pillai, D. Vijay Rao, Valentina E. Balas
Specifically, this work studies the impact on a neural network's fault tolerance of training the model with noise added to the inputs (adversarial robustness) and noise added to the gradients (differential privacy).
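The two noise-injection points compared above can be shown in one toy training loop: perturbing the inputs before the forward pass versus perturbing the gradients before the update. The one-parameter least-squares model and all constants are illustrative assumptions, not the paper's setup.

```python
# Toy SGD on y = 3x showing the two perturbation points: input noise
# (adversarial-training style) vs gradient noise (DP-SGD style).
import random

random.seed(2)
data = [(x, 3.0 * x) for x in [i / 10 for i in range(1, 11)]]  # true slope 3

def train(w, input_noise=0.0, grad_noise=0.0, lr=0.05, epochs=200):
    for _ in range(epochs):
        for x, y in data:
            x_in = x + random.gauss(0.0, input_noise)   # input perturbation
            grad = 2 * (w * x_in - y) * x_in            # dL/dw, squared loss
            grad += random.gauss(0.0, grad_noise)       # gradient perturbation
            w -= lr * grad
    return w

w_clean = train(0.0)
w_input_noise = train(0.0, input_noise=0.1)
w_grad_noise = train(0.0, grad_noise=0.1)
print(w_clean, w_input_noise, w_grad_noise)  # all near the true slope 3.0
```

Both perturbations leave the learned parameter close to the true value at these noise levels, which is the kind of robustness/privacy interaction with training the work examines.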
1 code implementation • 6 Jul 2019 • Vasisht Duddu, D. Vijay Rao, Valentina E. Balas
Given this difference in functionality, a neural network is modelled as two separate networks, i.e., a Feature Extractor with an unsupervised learning objective and a Classifier with a supervised learning objective.
no code implementations • 31 Dec 2018 • Vasisht Duddu, Debasis Samanta, D. Vijay Rao, Valentina E. Balas
Deep learning is gaining importance in many applications.
no code implementations • 30 Mar 2018 • Vasisht Duddu, Debasis Samanta, D. Vijay Rao
Anonymity networks enable secure communication between users and service providers while preserving their anonymity and privacy.
Cryptography and Security • Networking and Internet Architecture