no code implementations • 4 Oct 2023 • William Kong, Andrés Muñoz Medina, Mónica Ribero
To overcome this issue, we develop a new DP-SGD variant for similarity-based loss functions -- in particular the commonly used contrastive loss -- that manipulates gradients of the objective function in a novel way to obtain a sensitivity of the summed gradient that is $O(1)$ for batch size $n$.
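For context, standard DP-SGD bounds sensitivity by clipping each per-example gradient before summing and adding Gaussian noise. A minimal sketch of that baseline step follows (the function name and the clip-norm/noise parameters are illustrative, not from the paper); note the comment on why a contrastive loss breaks this recipe, which is the problem the paper's variant addresses:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng=None):
    # Standard DP-SGD aggregation: clip each per-example gradient to L2 norm
    # at most clip_norm, sum, and add Gaussian noise scaled to clip_norm.
    # This bounds the summed gradient's sensitivity by clip_norm regardless
    # of batch size -- but it relies on the loss decomposing per example,
    # which a contrastive loss over pairs of examples does not.
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return total + noise
```

Under a contrastive loss, one individual's example appears in up to $n$ pairwise terms, so naive clipping yields sensitivity growing with the batch; the paper's gradient manipulation restores an $O(1)$ bound.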
no code implementations • 8 Oct 2020 • Andrés Muñoz Medina, Jenny Gillenwater
Given a particular dataset and a statistic (e.g., median, mode), this function family assigns utility to a possible output $o$ based on the number of individuals whose data would have to be added to or removed from the dataset in order for the statistic to take on value $o$.
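For the median, this inverse-sensitivity utility can be computed in closed form and plugged into the exponential mechanism. The sketch below is an illustration under stated assumptions (function names are mine, candidates are taken from a fixed finite set, and a value counts as "a median" if at least half the points lie on each side of it); it is not the paper's implementation:

```python
import numpy as np

def inverse_sensitivity_utility(data, o):
    # Utility of candidate output o for the median: the negated minimum
    # number of individuals that must be added to or removed from the
    # dataset for o to become a median.
    n = len(data)
    a = np.sum(data <= o)  # points on or below o
    b = np.sum(data >= o)  # points on or above o
    # Adding k copies of o (or removing k far-side points) makes o a
    # median once k >= n - 2a and k >= n - 2b; both edits cost the same.
    return -max(0, n - 2 * a, n - 2 * b)

def exp_mech_median(data, candidates, epsilon, rng=None):
    # Exponential mechanism: sample a candidate with probability
    # proportional to exp(epsilon * utility / 2). The utility above has
    # sensitivity 1, so the sampler is epsilon-differentially private.
    rng = rng or np.random.default_rng()
    scores = np.array([inverse_sensitivity_utility(data, o) for o in candidates],
                      dtype=float)
    logits = epsilon * scores / 2.0
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(candidates, p=probs)
```

On `[1, 2, 3, 4, 5]`, the true median 3 gets utility 0 while 5 gets utility -3 (three copies of 5 must be added before 5 becomes a median), so outputs near the true median are exponentially preferred.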
no code implementations • 2 Jul 2020 • Andrés Muñoz Medina, Umar Syed, Sergei Vassilvitskii, Ellen Vitercik
We also prove a lower bound demonstrating that the difference between the objective value of our algorithm's solution and the optimal solution is tight up to logarithmic factors among all differentially private algorithms.
no code implementations • 14 Feb 2018 • Andrés Muñoz Medina, Sergei Vassilvitskii, Dong Yin
The rollout of new versions of a feature in modern applications is a manual multi-stage process, as the feature is released to ever larger groups of users, while its performance is carefully monitored.
no code implementations • NeurIPS 2017 • Andrés Muñoz Medina, Sergei Vassilvitskii
In the context of advertising auctions, finding good reserve prices is a notoriously challenging learning problem.