1 code implementation • 23 Oct 2024 • Ivoline C. Ngong, Joseph P. Near, Niloofar Mireshghallah
Differentially private SGD (DPSGD) enables privacy-preserving training of language models, but often reduces utility, diversity, and linguistic quality.
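For context, the DP-SGD algorithm this entry builds on clips each example's gradient to a fixed norm and adds calibrated Gaussian noise to the average. A minimal numpy sketch; the function and hyperparameter names are illustrative, not taken from the paper:

```python
import numpy as np

def dpsgd_step(params, per_example_grads, clip_norm=1.0,
               noise_multiplier=1.0, lr=0.1, rng=np.random.default_rng(0)):
    """One DP-SGD update: clip each example's gradient, average, add noise."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian noise scaled to the clipping bound provides the privacy guarantee.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```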
1 code implementation • 10 Feb 2022 • Timothy Stevens, Ivoline C. Ngong, David Darais, Calvin Hirsch, David Slater, Joseph P. Near
We present backpropagation clipping, a novel variant of differentially private stochastic gradient descent (DP-SGD) for privacy-preserving deep learning.
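The core idea, as the abstract describes it, is to enforce clipping during backpropagation itself rather than on materialized per-example gradients. A rough sketch of that idea for a single linear layer, with illustrative bounds `alpha` and `beta` (a simplification, not the paper's full algorithm):

```python
import numpy as np

def clip_rows(x, bound):
    """Scale each row of x so its L2 norm is at most `bound`."""
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    return x * np.minimum(1.0, bound / (norms + 1e-12))

def clipped_linear_grad(inputs, upstream_grads, alpha=1.0, beta=1.0):
    """Each example's weight gradient is an outer product of its (clipped)
    input and upstream gradient, so its norm is bounded by alpha * beta
    without ever materializing per-example gradients."""
    a = clip_rows(inputs, alpha)         # forward activations, shape (B, d_in)
    d = clip_rows(upstream_grads, beta)  # backward signals, shape (B, d_out)
    return a.T @ d / len(a)              # mean weight gradient, (d_in, d_out)
```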
no code implementations • 9 Feb 2022 • Krystal Maughan, Ivoline C. Ngong, Joseph P. Near
As AI-based systems increasingly impact many areas of our lives, auditing these systems for fairness is a high-stakes problem.
no code implementations • 30 Nov 2020 • Ivoline C. Ngong, Krystal Maughan, Joseph P. Near
Group fairness metrics can detect when a deep learning model behaves differently for advantaged and disadvantaged groups, but even models that score well on these metrics can make blatantly unfair predictions.
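One standard group fairness metric of this kind is demographic parity, which compares positive-prediction rates across groups; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# A model that flags group 1 far more often scores poorly on this metric:
print(demographic_parity_gap([1, 0, 0, 1, 1, 1], [0, 0, 0, 1, 1, 1]))  # ~0.667
```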
no code implementations • 28 Sep 2020 • Krystal Maughan, Joseph P. Near
Deep learning has produced big advances in artificial intelligence, but trained neural networks often reflect and amplify bias in their training data, and thus produce unfair predictions.
1 code implementation • 20 Sep 2018 • Noah Johnson, Joseph P. Near, Joseph M. Hellerstein, Dawn Song
Differential privacy is fast becoming the gold standard in enabling statistical analysis of data while protecting the privacy of individuals.
Cryptography and Security
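As a reminder of the basic mechanism behind differential privacy: a counting query changes by at most 1 when one individual is added or removed, so Laplace noise with scale 1/epsilon yields an epsilon-DP answer. A minimal sketch; the names and data are illustrative:

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng=np.random.default_rng(0)):
    """epsilon-DP count: a count has sensitivity 1, so Laplace(1/epsilon) suffices."""
    true_count = sum(predicate(x) for x in data)
    return true_count + rng.laplace(0.0, 1.0 / epsilon)

# Example: noisy count of records with age over 40, epsilon = 0.1.
ages = [23, 45, 67, 34, 51, 29, 62]
print(laplace_count(ages, lambda a: a > 40, epsilon=0.1))
```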
2 code implementations • 28 Jun 2017 • Noah Johnson, Joseph P. Near, Dawn Song
To meet these requirements, we propose elastic sensitivity, a novel method for approximating the local sensitivity of queries with general equijoins.
Cryptography and Security • Databases
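The intuition behind elastic sensitivity can be sketched for a count over a single equijoin: adding or removing one row changes the result by at most the maximum join-key frequency in the other relation. This is a simplified illustration only; the paper's definition also bounds sensitivity at distance k from the true database and smooths the result before calibrating noise:

```python
from collections import Counter

def join_count_sensitivity(keys_a, keys_b):
    """Rough local-sensitivity bound for COUNT(A JOIN B ON key): one new row
    of A can match at most max-frequency-in-B rows of B, and vice versa."""
    mf_a = max(Counter(keys_a).values(), default=0)
    mf_b = max(Counter(keys_b).values(), default=0)
    return max(mf_a, mf_b)

# Example: key 'x' appears 3 times in B, so the bound is 3.
print(join_count_sensitivity(['x', 'y'], ['x', 'x', 'x', 'y']))  # 3
```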