no code implementations • 12 Nov 2023 • Lauren Watson, Eric Gan, Mohan Dantam, Baharan Mirzasoleiman, Rik Sarkar
Differentially private stochastic gradient descent (DP-SGD) is known to have poorer training and test performance on large neural networks, compared to ordinary stochastic gradient descent (SGD).
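As context for the comparison above, the standard DP-SGD update clips each example's gradient to a norm bound and adds Gaussian noise before taking a step. The sketch below is a generic illustration of that update on logistic regression, not the paper's own method; names such as `dp_sgd_step`, `clip_norm`, and `noise_multiplier` are illustrative.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD step for logistic regression (illustrative sketch).

    Per-example gradients are clipped to `clip_norm`, summed, perturbed
    with Gaussian noise of scale noise_multiplier * clip_norm, then averaged.
    """
    n = X.shape[0]
    preds = 1.0 / (1.0 + np.exp(-X @ w))          # sigmoid predictions
    per_example_grads = (preds - y)[:, None] * X  # shape (n, d)

    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    factors = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * factors

    # Add Gaussian noise calibrated to the clipping bound.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    noisy_grad = (clipped.sum(axis=0) + noise) / n
    return w - lr * noisy_grad

# Toy usage on random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(128, 5))
y = (X @ np.array([1.0, -1.0, 0.5, 0.0, 2.0]) > 0).astype(float)
w = np.zeros(5)
for _ in range(100):
    w = dp_sgd_step(w, X, y)
```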
no code implementations • 9 Nov 2023 • Lauren Watson, Zeno Kujawa, Rayna Andreeva, Hao-Tsung Yang, Tariq Elahi, Rik Sarkar
For pre-trained networks, the approach is found to yield more efficient yet accurate evaluation using small subsets.
1 code implementation • 8 Jun 2023 • Gonzalo Martínez, Lauren Watson, Pedro Reviriego, José Alberto Hernández, Marc Juarez, Rik Sarkar
Our results show that the quality and diversity of the generated images can degrade over time, suggesting that incorporating AI-created data can have undesired effects on future versions of generative models.
no code implementations • 17 Feb 2023 • Gonzalo Martínez, Lauren Watson, Pedro Reviriego, José Alberto Hernández, Marc Juarez, Rik Sarkar
Therefore, future versions of generative AI tools will be trained with Internet data that is a mix of original and AI-generated data.
no code implementations • 1 Jun 2022 • Lauren Watson, Rayna Andreeva, Hao-Tsung Yang, Rik Sarkar
The Shapley value has been proposed as a solution to many applications in machine learning, including the equitable valuation of data.
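For context, data Shapley values are commonly estimated by Monte Carlo permutation sampling: sample random orderings of the data points and average each point's marginal contribution to a utility function such as validation accuracy. The sketch below is a generic estimator of this kind, not the paper's proposed method; `shapley_data_values` and the toy utility are illustrative.

```python
import numpy as np

def shapley_data_values(utility, n_points, n_permutations=200, seed=0):
    """Monte Carlo permutation-sampling estimate of data Shapley values.

    `utility(subset)` should return the value (e.g. validation accuracy)
    of a model trained on the given iterable of point indices.
    """
    rng = np.random.default_rng(seed)
    values = np.zeros(n_points)
    for _ in range(n_permutations):
        perm = rng.permutation(n_points)
        prev_utility = utility([])
        subset = []
        for idx in perm:
            subset.append(idx)
            curr_utility = utility(subset)
            values[idx] += curr_utility - prev_utility
            prev_utility = curr_utility
    return values / n_permutations

# Toy utility: a subset is worth the number of distinct labels it covers.
labels = np.array([0, 0, 1, 1, 2])
toy_utility = lambda subset: len(set(labels[list(subset)])) if subset else 0
print(shapley_data_values(toy_utility, n_points=5, n_permutations=500))
```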
no code implementations • 7 Mar 2022 • Lauren Watson, Abhirup Ghosh, Benedek Rozemberczki, Rik Sarkar
One version of the algorithm uses the entire data history to improve the model for the recent window.
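To illustrate the general idea of reusing the full data history when updating for a recent window (a minimal sketch under assumed names, not the paper's algorithm), one can warm-start a model on all past data and then continue fitting on the recent window:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Simulated stream: a large historical segment plus a small recent window.
X_hist, y_hist = rng.normal(size=(1000, 4)), rng.integers(0, 2, 1000)
X_recent, y_recent = rng.normal(size=(100, 4)), rng.integers(0, 2, 100)

# Warm start on the full history, then adapt to the recent window,
# so the recent-window model benefits from all earlier data.
model = SGDClassifier()
model.partial_fit(X_hist, y_hist, classes=np.array([0, 1]))
model.partial_fit(X_recent, y_recent)
```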
2 code implementations • 11 Feb 2022 • Benedek Rozemberczki, Lauren Watson, Péter Bayer, Hao-Tsung Yang, Olivér Kiss, Sebastian Nilsson, Rik Sarkar
Over the last few years, the Shapley value, a solution concept from cooperative game theory, has found numerous applications in machine learning.
1 code implementation • ICLR 2022 • Lauren Watson, Chuan Guo, Graham Cormode, Alex Sablayrolles
The vulnerability of machine learning models to membership inference attacks has received much attention in recent years.
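For background, the simplest membership inference baseline thresholds the per-example loss, exploiting the fact that training members tend to incur lower loss than held-out non-members. The sketch below shows this generic baseline, not the attack or defence studied in the paper; `loss_threshold_attack` and the simulated losses are illustrative.

```python
import numpy as np

def loss_threshold_attack(losses, threshold):
    """Predict 'member' (1) when the per-example loss is below the threshold.

    A classic baseline: models fit training members more tightly,
    so members usually incur lower loss than non-members.
    """
    return (np.asarray(losses) < threshold).astype(int)

# Toy example: members have lower simulated losses than non-members.
rng = np.random.default_rng(0)
member_losses = rng.exponential(scale=0.2, size=1000)
nonmember_losses = rng.exponential(scale=1.0, size=1000)
preds = loss_threshold_attack(
    np.concatenate([member_losses, nonmember_losses]), threshold=0.5)
truth = np.concatenate([np.ones(1000), np.zeros(1000)])
print("attack accuracy:", (preds == truth).mean())
```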
no code implementations • 25 Jun 2020 • Lauren Watson, Benedek Rozemberczki, Rik Sarkar
Private machine learning involves the addition of noise during training, resulting in lower accuracy.
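As a simple illustration of why added noise costs accuracy (a generic Gaussian-mechanism sketch rather than the training procedure studied in the paper), privately releasing even a mean requires noise calibrated to the query's sensitivity and privacy budget, which perturbs the estimate; `private_mean` and its parameters are illustrative.

```python
import numpy as np

def private_mean(x, epsilon, delta, bound=1.0, seed=0):
    """Gaussian-mechanism estimate of the mean of values clipped to [-bound, bound].

    The mean's sensitivity is 2*bound/n; the noise scale grows as epsilon
    shrinks, so stronger privacy yields a less accurate estimate.
    """
    rng = np.random.default_rng(seed)
    x = np.clip(np.asarray(x, dtype=float), -bound, bound)
    sensitivity = 2.0 * bound / len(x)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return x.mean() + rng.normal(0.0, sigma)

data = np.random.default_rng(1).uniform(-1, 1, size=500)
print("true mean:   ", data.mean())
print("private mean:", private_mean(data, epsilon=0.5, delta=1e-5))
```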