Search Results for author: Amrita Roy Chowdhury

Found 9 papers, 3 papers with code

FairProof: Confidential and Certifiable Fairness for Neural Networks

1 code implementation • 19 Feb 2024 • Chhavi Yadav, Amrita Roy Chowdhury, Dan Boneh, Kamalika Chaudhuri

To this end, we propose FairProof -- a system that uses Zero-Knowledge Proofs (a cryptographic primitive) to publicly verify the fairness of a model, while maintaining confidentiality.
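FairProof itself relies on full zero-knowledge proof systems. As a much simpler illustration of the commit-then-verify pattern such systems build on, here is a hash-based commitment sketch; the `commit`/`verify` names and the weight serialization are our own illustration, not the paper's protocol, and a plain hash commitment is a building block, not a ZKP:

```python
import hashlib
import secrets

def commit(data: bytes) -> tuple[bytes, bytes]:
    """Produce a hiding, binding commitment to `data` using a random nonce."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + data).digest()
    return digest, nonce

def verify(commitment: bytes, data: bytes, nonce: bytes) -> bool:
    """Check that `data` matches the earlier commitment."""
    return hashlib.sha256(nonce + data).digest() == commitment

# The prover commits to (a serialization of) the model weights up front...
weights = b"serialized-model-weights"
c, nonce = commit(weights)
# ...and can later open the commitment, proving the model was not swapped,
# without having revealed the weights in the meantime.
assert verify(c, weights, nonce)
assert not verify(c, b"different-weights", nonce)
```

A ZKP goes further: it lets the prover show a *property* of the committed weights (here, fairness) without ever opening the commitment.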

Fairness

Can Membership Inferencing be Refuted?

no code implementations • 7 Mar 2023 • Zhifeng Kong, Amrita Roy Chowdhury, Kamalika Chaudhuri

Given a machine learning model, a data point and some auxiliary information, the goal of an MI attack is to determine whether the data point was used to train the model.
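A standard concrete instance of such an MI attack is loss thresholding: a point with unusually low loss under the model is guessed to be a training member. The sketch below uses toy data and a threshold of our own choosing, not the paper's setup:

```python
# Illustrative loss-threshold membership-inference (MI) attack; the data,
# model, and threshold are our own toy choices, not from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))
y_train = (X_train[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
X_out = rng.normal(size=(200, 5))           # points the model never saw
y_out = (X_out[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

def point_loss(model, x, y):
    """Per-example cross-entropy loss of the model on (x, y)."""
    p = model.predict_proba(x.reshape(1, -1))[0, y]
    return -np.log(p + 1e-12)

def infer_membership(model, x, y, threshold=0.3):
    """Guess 'training member' when the example's loss falls below a threshold."""
    return point_loss(model, x, y) < threshold

# On average, members tend to incur lower loss than non-members,
# which is the signal the attack exploits.
train_losses = [point_loss(model, x, y) for x, y in zip(X_train, y_train)]
out_losses = [point_loss(model, x, y) for x, y in zip(X_out, y_out)]
```

The paper's question is whether a model owner can *refute* such an attack's claim, e.g. by exhibiting a plausible training set that excludes the point.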

Privacy Implications of Shuffling

no code implementations • ICLR 2022 • Casey Meehan, Amrita Roy Chowdhury, Kamalika Chaudhuri, Somesh Jha

LDP deployments are vulnerable to inference attacks as an adversary can link the noisy responses to their identity and subsequently, auxiliary information using the order of the data.

A Shuffling Framework for Local Differential Privacy

no code implementations • 11 Jun 2021 • Casey Meehan, Amrita Roy Chowdhury, Kamalika Chaudhuri, Somesh Jha

LDP deployments are vulnerable to inference attacks as an adversary can link the noisy responses to their identity and subsequently, auxiliary information using the order of the data.
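The basic pipeline being discussed can be sketched as randomized response (the classic local-DP mechanism) followed by a shuffler that destroys the ordering an adversary would otherwise exploit. The flip probability and function names below are illustrative, not the paper's framework:

```python
# Sketch: randomized response (local DP) + shuffling to hide response order.
# The flip probability p_flip = 0.25 is an arbitrary illustrative choice.
import random

def randomized_response(bit: int, p_flip: float = 0.25) -> int:
    """Report the true bit with probability 1 - p_flip, the flipped bit otherwise."""
    return bit ^ 1 if random.random() < p_flip else bit

def shuffle(responses):
    """The shuffler removes ordering information before release."""
    out = list(responses)
    random.shuffle(out)
    return out

random.seed(0)
true_bits = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
noisy = [randomized_response(b) for b in true_bits]
released = shuffle(noisy)

# The analyst can still debias the aggregate count of ones:
p = 0.25
est = (sum(released) - p * len(released)) / (1 - 2 * p)
```

The estimate `est` is unbiased for the true count, while the shuffling step prevents linking any individual response back to its position in the submission order.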

Data Privacy in Trigger-Action Systems

1 code implementation • 10 Dec 2020 • Yunang Chen, Amrita Roy Chowdhury, Ruizhe Wang, Andrei Sabelfeld, Rahul Chatterjee, Earlence Fernandes

Trigger-action platforms (TAPs) allow users to connect independent web-based or IoT services to achieve useful automation.

Cryptography and Security

ShadowNet: A Secure and Efficient On-device Model Inference System for Convolutional Neural Networks

no code implementations • 11 Nov 2020 • Zhichuang Sun, Ruimin Sun, Changming Liu, Amrita Roy Chowdhury, Long Lu, Somesh Jha

ShadowNet protects the model privacy with Trusted Execution Environment (TEE) while securely outsourcing the heavy linear layers of the model to the untrusted hardware accelerators.
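The outsourcing trick exploits the linearity of those layers. A minimal additive-masking sketch of the idea is below; this is our simplification for illustration, not ShadowNet's actual scheme (which also transforms the weights so the accelerator never sees them in the clear):

```python
# Toy sketch of securely outsourcing a linear layer y = x @ W: the trusted
# side masks the input, the untrusted accelerator multiplies, and the trusted
# side removes the mask's contribution using linearity.
import numpy as np

rng = np.random.default_rng(42)
W = rng.normal(size=(4, 3))        # linear-layer weights
x = rng.normal(size=(1, 4))        # private activation inside the TEE

# TEE: add a random mask before the activation leaves the enclave.
r = rng.normal(size=(1, 4))
x_masked = x + r

# Untrusted accelerator: only ever sees the masked input.
y_masked = x_masked @ W

# TEE: subtract the (precomputable) mask contribution to recover the result.
y = y_masked - r @ W
assert np.allclose(y, x @ W)
```

The correction term `r @ W` can be computed ahead of time, so the TEE's online work stays small while the heavy matrix multiply runs on the accelerator.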

Data-Dependent Differentially Private Parameter Learning for Directed Graphical Models

no code implementations • ICML 2020 • Amrita Roy Chowdhury, Theodoros Rekatsinas, Somesh Jha

Our solution optimizes for the utility of inference queries over the DGM and adds noise that is customized to the properties of the private input dataset and the graph structure of the DGM.
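As a baseline for comparison, the standard data-independent approach adds Laplace noise to each sufficient statistic before normalizing it into a conditional probability. The sketch below shows that baseline only; the paper's data-dependent mechanism is more refined, and the counts here are made up:

```python
# Baseline Laplace mechanism on the counts behind one conditional-probability
# table (CPT) entry of a directed graphical model. Counts are illustrative.
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Release value + Laplace(sensitivity / epsilon) noise (epsilon-DP)."""
    return value + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
counts = np.array([40.0, 60.0])    # sufficient statistics for one CPT row

noisy = np.array([laplace_mechanism(c, sensitivity=1.0, epsilon=1.0, rng=rng)
                  for c in counts])
theta = np.clip(noisy, 1e-6, None)  # keep probabilities strictly positive
theta = theta / theta.sum()         # renormalize into a valid distribution
```

The data-dependent idea is to shape this noise per parameter rather than uniformly, spending the privacy budget where inference queries are most sensitive to error.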

Concise Explanations of Neural Networks using Adversarial Training

1 code implementation • ICML 2020 • Prasad Chalasani, Jiefeng Chen, Amrita Roy Chowdhury, Somesh Jha, Xi Wu

Our first contribution is a theoretical exploration of how these two properties (when using attributions based on Integrated Gradients, or IG) are related to adversarial training, for a class of 1-layer networks (which includes logistic regression models for binary and multi-class classification). For these networks we show that (a) adversarial training using an $\ell_\infty$-bounded adversary produces models with sparse attribution vectors, and (b) natural model training, while encouraging stable explanations (via an extra term in the loss function), is equivalent to adversarial training.
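For concreteness, IG attributes a prediction by averaging gradients along a straight path from a baseline input to the actual input. The sketch below computes IG numerically for a logistic-regression model with made-up weights (the midpoint-rule approximation and all values are our illustration):

```python
# Numerical Integrated Gradients (IG) for a logistic-regression model.
# Weights, bias, input, and baseline are illustrative, not from the paper.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def integrated_gradients(w, b, x, baseline, steps=1000):
    """IG_i = (x_i - x'_i) * average of d sigmoid(w.x+b)/dx_i along the path."""
    alphas = (np.arange(steps) + 0.5) / steps           # midpoint rule
    path = baseline + alphas[:, None] * (x - baseline)  # points on the path
    z = path @ w + b
    grads = (sigmoid(z) * (1 - sigmoid(z)))[:, None] * w
    return (x - baseline) * grads.mean(axis=0)

w = np.array([2.0, -1.0, 0.0])
b = 0.1
x = np.array([1.0, 0.5, 3.0])
baseline = np.zeros(3)

attr = integrated_gradients(w, b, x, baseline)
# Completeness axiom: attributions sum to f(x) - f(baseline).
total = sigmoid(x @ w + b) - sigmoid(baseline @ w + b)
```

Sparse attributions, in this picture, mean `attr` concentrates on few coordinates; note the zero-weight third feature receives zero attribution regardless of its input value.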

Multi-class Classification
