Interpreting the learning dynamics of neural networks can provide useful insights into how networks learn and inform the development of better training and design approaches.
Recent studies assessing the efficacy of neural network pruning methods uncovered a surprising finding: in ablation studies on existing pruning-at-initialization methods (SNIP, GraSP, SynFlow, and magnitude pruning), the performance of these methods remains unchanged, and sometimes even improves, when the mask positions are randomly shuffled within each layer (Layerwise Shuffling) or when new initial weight values are sampled (Reinit), while the pruning masks themselves are kept fixed.
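A minimal sketch of the two ablations described above, assuming pruning masks and weights are stored as NumPy arrays keyed by layer name; the function names and the `init_fn(shape, rng)` signature are illustrative, not the cited papers' actual code:

```python
import numpy as np

def layerwise_shuffle(masks, seed=0):
    """Layerwise Shuffling: randomly permute mask positions within each
    layer, preserving the per-layer sparsity the pruning method chose."""
    rng = np.random.default_rng(seed)
    shuffled = {}
    for name, mask in masks.items():
        flat = mask.flatten()          # copy of the binary mask, 1-D
        rng.shuffle(flat)              # permute kept/pruned positions in place
        shuffled[name] = flat.reshape(mask.shape)
    return shuffled

def reinit(weights, init_fn, seed=0):
    """Reinit: sample fresh initial weight values (masks stay unchanged)."""
    rng = np.random.default_rng(seed)
    return {name: init_fn(w.shape, rng) for name, w in weights.items()}
```

Both ablations deliberately keep the per-layer sparsity ratios intact, which is why unchanged performance suggests those ratios, rather than the individual mask positions or initial values, drive the results.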
We present DeClaW, a system for detecting, classifying, and warning of adversarial inputs presented to a classification neural network.
Natural Language Processing (NLP) techniques can be applied to help diagnose medical conditions such as depression from a collection of a person's utterances.
Recent advances in differentially private deep learning have demonstrated that applying differential privacy, specifically the DP-SGD algorithm, has a disparate impact on different sub-groups in the population: it leads to a significantly larger drop in model utility for under-represented sub-populations (minorities) than for well-represented ones.
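For context, a minimal sketch of the per-example clipping and noising step at the core of DP-SGD (Abadi et al., 2016); the array shapes and function name here are illustrative assumptions, not a specific library's API:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """per_example_grads: array of shape (batch, dim).
    Clip each example's gradient to L2 norm <= clip_norm, average,
    then add Gaussian noise calibrated to the clipping bound."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / (norms + 1e-12))
    clipped = per_example_grads * scale          # bound each example's influence
    mean_grad = clipped.mean(axis=0)
    std = noise_multiplier * clip_norm / len(per_example_grads)
    return mean_grad + rng.normal(0.0, std, size=mean_grad.shape)
```

Because clipping shrinks large gradients the most, and examples from under-represented sub-groups often produce larger gradients, this step is a plausible source of the disparate impact the sentence describes.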
Understanding the per-layer learning dynamics of deep neural networks is of significant interest, as it may provide insights into how neural networks learn and point toward better training regimens.
The deployment of deep learning across fields and industries is growing rapidly due to its strong performance, which in turn relies on the availability of data and compute.
In this paper, we measure the effectiveness of $\epsilon$-Differential Privacy (DP) when applied to medical imaging.
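For reference, the standard definition of $\epsilon$-DP (Dwork et al., 2006), restated here in general form rather than as this paper's specific setup: a randomized mechanism $M$ satisfies $\epsilon$-DP if

```latex
% Standard \epsilon-DP guarantee for any pair of neighboring datasets D, D'
% (differing in one record) and any measurable set S of outputs.
\[
\Pr[M(D) \in S] \;\le\; e^{\epsilon}\, \Pr[M(D') \in S].
\]
```

Smaller $\epsilon$ means the output distribution changes less when any single patient's record is added or removed, i.e., a stronger privacy guarantee.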