1 code implementation • 27 Aug 2022 • Sasikanth Kotti, Mayank Vatsa, Richa Singh
Datasets for training face verification systems are difficult to obtain and prone to privacy issues.
no code implementations • 24 Jun 2021 • Rakshit Naidu, Aman Priyanshu, Aadith Kumar, Sasikanth Kotti, Haofan Wang, FatemehSadat Mireshghallah
Given the increasing use of personal data for training Deep Neural Networks (DNNs) in tasks such as medical imaging and diagnosis, differentially private training of DNNs is surging in importance, and a large body of work focuses on providing a better privacy-utility trade-off.
1 code implementation • 22 Jun 2021 • Archit Uniyal, Rakshit Naidu, Sasikanth Kotti, Sahib Singh, Patrik Joslin Kenfack, FatemehSadat Mireshghallah, Andrew Trask
Recent advances in differentially private deep learning have demonstrated that applying differential privacy, specifically the DP-SGD algorithm, has a disparate impact on different sub-groups in the population: model utility drops significantly more for under-represented sub-populations (minorities) than for well-represented ones.
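The DP-SGD algorithm referenced above modifies standard SGD by clipping each per-example gradient to a fixed norm and adding Gaussian noise to the aggregate before updating. The sketch below is illustrative only, not code from the cited paper; the function name, hyperparameters, and NumPy-based gradient representation are assumptions for the example.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One illustrative DP-SGD update (hypothetical helper, not the paper's code):
    clip each per-example gradient to clip_norm, sum, add Gaussian noise
    scaled by noise_multiplier * clip_norm, then apply the averaged step."""
    rng = np.random.default_rng(rng)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    grad_sum = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound (sensitivity).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad_sum.shape)
    noisy_mean = (grad_sum + noise) / len(per_example_grads)
    return params - lr * noisy_mean
```

Clipping bounds each example's influence on the update, which is exactly why utility can degrade more for minority sub-groups: their (often larger) gradients are clipped more aggressively relative to the noise added.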
1 code implementation • 27 May 2020 • Sahib Singh, Harshvardhan Sikka, Sasikanth Kotti, Andrew Trask
In this paper we measure the effectiveness of $\epsilon$-Differential Privacy (DP) when applied to medical imaging.
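The $\epsilon$-DP guarantee measured above is typically achieved by adding noise calibrated to a query's sensitivity and the privacy budget $\epsilon$. A minimal sketch using the classic Laplace mechanism is shown below; it is a generic illustration of $\epsilon$-DP, not the paper's method, and the function name and parameters are assumptions.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Illustrative Laplace mechanism (hypothetical helper): releasing
    true_value plus Laplace(0, sensitivity / epsilon) noise satisfies
    epsilon-differential privacy for a query with the given L1 sensitivity."""
    rng = np.random.default_rng(rng)
    scale = sensitivity / epsilon  # smaller epsilon -> more noise, more privacy
    return true_value + rng.laplace(0.0, scale)
```

A smaller $\epsilon$ gives a stronger privacy guarantee at the cost of noisier outputs, which is the privacy-utility trade-off the paper evaluates in the medical-imaging setting.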