Search Results for author: Sahib Singh

Found 8 papers, 5 papers with code

WeightScale: Interpreting Weight Change in Neural Networks

no code implementations • 7 Jul 2021 • Ayush Manish Agrawal, Atharva Tendle, Harshvardhan Sikka, Sahib Singh

Interpreting the learning dynamics of neural networks can provide useful insights into how networks learn and can inform the development of better training and design approaches.

Dimensionality Reduction

Why is Pruning at Initialization Immune to Reinitializing and Shuffling?

no code implementations • 5 Jul 2021 • Sahib Singh, Rosanne Liu

Recent studies assessing the efficacy of neural network pruning methods uncovered a surprising finding: in ablation studies on existing pruning-at-initialization methods, namely SNIP, GraSP, SynFlow, and magnitude pruning, the performance of these methods remains unchanged, and sometimes even improves, when the mask positions are randomly shuffled within each layer (Layerwise Shuffling) or new initial weight values are sampled (Reinit), while the pruning masks themselves are kept the same.
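The two ablations the abstract describes can be sketched in a few lines. This is a minimal NumPy sketch, not the authors' code: the helper names and the Kaiming-style reinitialization are illustrative assumptions.

```python
import numpy as np

def layerwise_shuffle(mask: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly permute mask positions within one layer, preserving its sparsity."""
    flat = mask.flatten()  # flatten() returns a copy, so the original mask is untouched
    rng.shuffle(flat)
    return flat.reshape(mask.shape)

def reinit(weights: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Sample fresh initial weights (Kaiming-style normal, an assumption here);
    the pruning mask is applied unchanged afterwards."""
    fan_in = weights.shape[0]
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=weights.shape)

rng = np.random.default_rng(0)
mask = (rng.random((4, 4)) < 0.5).astype(np.float32)  # a ~50%-sparse binary mask
shuffled = layerwise_shuffle(mask, rng)
weights = rng.normal(size=(4, 4))
pruned_reinit = reinit(weights, rng) * mask  # same mask, new initial values
```

The key invariant the ablation relies on is that Layerwise Shuffling keeps the per-layer sparsity ratio fixed while destroying the positional information the pruning criterion selected.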

Using Anomaly Feature Vectors for Detecting, Classifying and Warning of Outlier Adversarial Examples

no code implementations • 1 Jul 2021 • Nelson Manohar-Alers, Ryan Feng, Sahib Singh, Jiguo Song, Atul Prakash

We present DeClaW, a system for detecting, classifying, and warning of adversarial inputs presented to a classification neural network.

Adversarial Attack Detection

Benchmarking Differential Privacy and Federated Learning for BERT Models

2 code implementations • 26 Jun 2021 • Priyam Basu, Tiasa Singha Roy, Rakshit Naidu, Zumrut Muftuoglu, Sahib Singh, FatemehSadat Mireshghallah

Natural Language Processing (NLP) techniques can be applied to help with the diagnosis of medical conditions such as depression, using a collection of a person's utterances.

Federated Learning

DP-SGD vs PATE: Which Has Less Disparate Impact on Model Accuracy?

1 code implementation • 22 Jun 2021 • Archit Uniyal, Rakshit Naidu, Sasikanth Kotti, Sahib Singh, Patrik Joslin Kenfack, FatemehSadat Mireshghallah, Andrew Trask

Recent advances in differentially private deep learning have demonstrated that applying differential privacy, specifically the DP-SGD algorithm, has a disparate impact on different sub-groups in the population: the drop in model utility is significantly larger for under-represented sub-populations (minorities) than for well-represented ones.
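For context, the DP-SGD mechanism the abstract refers to clips each per-sample gradient and adds Gaussian noise before averaging. A minimal sketch under simplifying assumptions (plain NumPy, one parameter vector, no privacy accounting); the function names are illustrative, not from the paper:

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One DP-SGD update: clip each per-sample gradient to clip_norm,
    sum, add Gaussian noise scaled to the clip bound, average, and step."""
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    mean_grad = (summed + noise) / len(per_sample_grads)
    return params - lr * mean_grad

rng = np.random.default_rng(1)
grads = [rng.normal(size=3) for _ in range(8)]
new_params = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1,
                         lr=0.1, params=np.zeros(3), rng=rng)
```

The per-sample clipping is one intuition behind the disparate impact: gradients of under-represented examples tend to be larger and are therefore clipped more aggressively.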


Investigating Learning in Deep Neural Networks using Layer-Wise Weight Change

2 code implementations • 13 Nov 2020 • Ayush Manish Agrawal, Atharva Tendle, Harshvardhan Sikka, Sahib Singh, Amr Kayid

Understanding the per-layer learning dynamics of deep neural networks is of significant interest, as it may provide insights into how neural networks learn and into the potential for better training regimens.
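A per-layer weight-change measurement can be sketched as a relative L2 distance between checkpoints. This is a generic sketch, not the paper's exact metric; the dictionary-of-arrays representation is an assumption:

```python
import numpy as np

def layerwise_weight_change(weights_before, weights_after):
    """Relative L2 change per layer between two training checkpoints.
    Both arguments map layer names to weight arrays of matching shape."""
    changes = {}
    for name in weights_before:
        delta = np.linalg.norm(weights_after[name] - weights_before[name])
        base = np.linalg.norm(weights_before[name]) + 1e-12  # avoid division by zero
        changes[name] = delta / base
    return changes

before = {"conv1": np.ones((2, 2)), "fc": np.zeros(3)}
after = {"conv1": np.full((2, 2), 1.5), "fc": np.zeros(3)}
changes = layerwise_weight_change(before, after)
```

Normalizing by the layer's own norm makes layers of very different sizes comparable, which is what a per-layer view of learning dynamics requires.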

Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy

2 code implementations • 10 Sep 2020 • Tom Farrand, FatemehSadat Mireshghallah, Sahib Singh, Andrew Trask

Deployment of deep learning across different fields and industries is growing rapidly, driven by its performance, which in turn relies on the availability of data and compute.


Benchmarking Differentially Private Residual Networks for Medical Imagery

1 code implementation • 27 May 2020 • Sahib Singh, Harshvardhan Sikka, Sasikanth Kotti, Andrew Trask

In this paper we measure the effectiveness of $\epsilon$-Differential Privacy (DP) when applied to medical imaging.
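As background on the $\epsilon$-Differential Privacy notion the abstract invokes: the textbook Laplace mechanism releases a scalar with $\epsilon$-DP by adding noise scaled to the query's sensitivity. This is a generic illustration of the definition, not the paper's training method (which trains residual networks under DP):

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Release `value` with epsilon-DP by adding Laplace(sensitivity/epsilon) noise.
    Smaller epsilon means a larger noise scale: stronger privacy, lower utility."""
    scale = sensitivity / epsilon
    return value + rng.laplace(0.0, scale)

rng = np.random.default_rng(0)
private_value = laplace_mechanism(1.0, sensitivity=1.0, epsilon=0.5, rng=rng)
```

The privacy/utility trade-off controlled by $\epsilon$ is exactly what benchmarks like this one measure on downstream task accuracy.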
