Search Results for author: Sanjay Kariyappa

Found 11 papers, 3 papers with code

SHAP@k: Efficient and Probably Approximately Correct (PAC) Identification of Top-k Features

no code implementations • 10 Jul 2023 • Sanjay Kariyappa, Leonidas Tsepenekas, Freddy Lécué, Daniele Magazzeni

While any method that computes SHAP values with uncertainty estimates (such as KernelSHAP and SamplingSHAP) can be trivially adapted to solve TkIP (the Top-k Identification Problem), doing so is highly sample-inefficient.

Feature Importance • Multi-Armed Bandits
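
The naive TkIP baseline the abstract calls sample-inefficient is easy to sketch: repeatedly draw permutation-based Shapley contribution samples, keep a confidence interval per feature, and stop once the intervals separate the top-k set from the rest. Below is a minimal numpy sketch of that baseline; the toy model, sizes, and stopping rule are illustrative assumptions, not the paper's bandit-based method.

    import numpy as np

    rng = np.random.default_rng(0)
    d, k = 8, 3
    w = rng.normal(size=d)
    f = lambda z: z @ w + 0.5 * z[0] * z[1]   # toy model to explain (assumed)
    x, bg = rng.normal(size=d), np.zeros(d)   # instance and baseline

    def sample_contrib(j):
        # One Monte Carlo sample of feature j's Shapley contribution.
        perm = rng.permutation(d)
        before = perm[:np.flatnonzero(perm == j)[0]]
        z = bg.copy(); z[before] = x[before]
        z_j = z.copy(); z_j[j] = x[j]
        return f(z_j) - f(z)

    n, sums, sqs = 0, np.zeros(d), np.zeros(d)
    while True:
        for j in range(d):
            c = sample_contrib(j); sums[j] += c; sqs[j] += c * c
        n += 1
        if n < 30:
            continue
        mean = sums / n
        se = np.sqrt(np.maximum(sqs / n - mean ** 2, 1e-12) / n)
        order = np.argsort(-mean)
        top, rest = order[:k], order[k:]
        # Stop once confidence intervals separate the top-k set from the rest.
        if mean[top].min() - 2 * se[top].max() > mean[rest].max() + 2 * se[rest].max():
            break
    print("top-k features:", sorted(top.tolist()), "after", n * d, "samples")

Because every feature is sampled uniformly, easy-to-separate features waste samples; framing the problem as a multi-armed bandit, as the title suggests, lets the sampling budget concentrate on the features near the top-k boundary.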

Measuring and Controlling Split Layer Privacy Leakage Using Fisher Information

no code implementations • 21 Sep 2022 • Kiwan Maeng, Chuan Guo, Sanjay Kariyappa, Edward Suh

Split learning and split inference run the training or inference of a large model by splitting it across client devices and the cloud.
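
For concreteness, one split-inference round can be sketched in a few lines of PyTorch (the architecture and split point below are arbitrary assumptions, not the paper's setup). Only the split-layer activation crosses the network, and that activation is exactly the signal whose leakage the paper measures with Fisher information.

    import torch
    import torch.nn as nn

    client_part = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # runs on the device
    server_part = nn.Sequential(nn.Linear(64, 64), nn.ReLU(),
                                nn.Linear(64, 10))              # runs in the cloud

    x = torch.randn(4, 32)   # private client input, never leaves the device
    z = client_part(x)       # split-layer activation, the only thing transmitted
    logits = server_part(z)  # cloud finishes the forward pass
    print(logits.shape)      # torch.Size([4, 10])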

ExPLoit: Extracting Private Labels in Split Learning

no code implementations • 25 Nov 2021 • Sanjay Kariyappa, Moinuddin K. Qureshi

Split learning is a popular technique used for vertical federated learning (VFL), where the goal is to jointly train a model on the private input and label data held by two parties.

Image Classification • Vertical Federated Learning
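
The two-party VFL protocol can be sketched as follows in PyTorch (toy models and sizes are hypothetical): the input party computes cut-layer activations, the label party computes the loss on its private labels, and only the cut-layer gradient is returned. That returned gradient is the side channel ExPLoit mines to recover the labels.

    import torch
    import torch.nn as nn

    bottom = nn.Sequential(nn.Linear(16, 8), nn.ReLU())  # input party's half
    top = nn.Linear(8, 2)                                # label party's half
    opt = torch.optim.SGD(list(bottom.parameters()) + list(top.parameters()), lr=0.1)

    x = torch.randn(32, 16)                # private inputs (input party)
    y = torch.randint(0, 2, (32,))         # private labels (label party)

    h = bottom(x)                          # input party sends cut-layer activations
    h_b = h.detach().requires_grad_(True)  # label party's view of the cut layer
    loss = nn.functional.cross_entropy(top(h_b), y)
    loss.backward()                        # label party backprops through its half
    h.backward(h_b.grad)                   # only the cut-layer gradient is returned
    opt.step()                             # (one optimizer for brevity; each party
                                           #  updates its own half in practice)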

Enabling Inference Privacy with Adaptive Noise Injection

no code implementations • 6 Apr 2021 • Sanjay Kariyappa, Ousmane Dia, Moinuddin K. Qureshi

To this end, we propose Adaptive Noise Injection (ANI), which uses a lightweight DNN on the client side to inject noise into each input before transmitting it to the service provider for inference.
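
A minimal PyTorch sketch of the idea (the noise network, input shapes, and Softplus scale head are assumptions for illustration, not the paper's architecture): a small client-side network maps each input to an input-specific noise scale, and only the noised input leaves the device.

    import torch
    import torch.nn as nn

    noise_net = nn.Sequential(nn.Linear(784, 128), nn.ReLU(),
                              nn.Linear(128, 784), nn.Softplus())  # client-side DNN

    x = torch.rand(1, 784)                     # private input (e.g., flattened image)
    sigma = noise_net(x)                       # input-specific noise scale
    x_noisy = x + sigma * torch.randn_like(x)  # only x_noisy is sent for inference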

Defending Against Model Stealing Attacks with Adaptive Misinformation

1 code implementation • CVPR 2020 • Sanjay Kariyappa, Moinuddin K. Qureshi

Deep Neural Networks (DNNs) are susceptible to model stealing attacks, which allow a data-limited adversary with no knowledge of the training dataset to clone the functionality of a target model using only black-box query access.

Misinformation
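
The attack being defended against is simple to sketch (the toy models and random queries below are assumptions; real attacks synthesize queries more carefully): the adversary queries the black-box victim and trains a clone to match its outputs.

    import torch
    import torch.nn as nn

    victim = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))  # black box
    clone = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))   # attacker's copy
    opt = torch.optim.Adam(clone.parameters(), lr=1e-3)

    for _ in range(200):
        q = torch.randn(64, 20)                  # attacker-chosen queries
        with torch.no_grad():
            soft = victim(q).softmax(dim=1)      # black-box query access only
        # cross-entropy against the victim's soft labels
        loss = -(soft * clone(q).log_softmax(dim=1)).sum(dim=1).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    # Adaptive Misinformation's defense: serve perturbed outputs on queries that
    # look out-of-distribution, degrading the clone without hurting benign accuracy.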

Improving Adversarial Robustness of Ensembles with Diversity Training

1 code implementation • 28 Jan 2019 • Sanjay Kariyappa, Moinuddin K. Qureshi

Deep Neural Networks are vulnerable to adversarial attacks even in settings where the attacker has no direct access to the model being attacked.

Adversarial Robustness
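
The black-box setting the abstract refers to is transferability, which is easy to demonstrate (the toy models and FGSM step below are illustrative assumptions): an adversarial example crafted on a surrogate model often fools an independently trained target, and Diversity Training encourages ensemble members not to share these vulnerable directions.

    import torch
    import torch.nn as nn

    surrogate = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
    target = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))

    x = torch.randn(1, 10, requires_grad=True)
    y = torch.tensor([0])
    loss = nn.functional.cross_entropy(surrogate(x), y)
    loss.backward()
    x_adv = (x + 0.1 * x.grad.sign()).detach()  # FGSM step on the surrogate only
    print(target(x_adv).argmax(dim=1))          # may flip away from y if it transfers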
