no code implementations • 27 Oct 2023 • Jaiden Fairoze, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Mingyuan Wang
We construct the first provable watermarking scheme for language models with public detectability or verifiability: we use a private key for watermarking and a public key for watermark detection.
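The key point is the asymmetry: embedding requires a secret, while detection needs only public information. A toy way to see this split (a hypothetical, insecure textbook-RSA sketch for illustration only, not the paper's actual construction) is to sign a digest of the output with a private key and let anyone verify it with the public key:

```python
# Toy sketch of public detectability: watermarking with a PRIVATE key,
# detection with a PUBLIC key. Textbook RSA with tiny primes -- insecure,
# purely illustrative; not the scheme from the paper.
import hashlib

# Tiny RSA keypair (real keys are >= 2048 bits).
p, q = 61, 53
n = p * q              # public modulus
phi = (p - 1) * (q - 1)
e = 17                 # public exponent
d = pow(e, -1, phi)    # private exponent (Python 3.8+ modular inverse)

def embed_watermark(text: str) -> tuple[str, int]:
    """Embedding uses the PRIVATE key: sign a digest of the text."""
    h = int.from_bytes(hashlib.sha256(text.encode()).digest(), "big") % n
    return text, pow(h, d, n)  # the tag travels with the text

def detect_watermark(text: str, tag: int) -> bool:
    """Detection uses only the PUBLIC key (e, n): anyone can run this."""
    h = int.from_bytes(hashlib.sha256(text.encode()).digest(), "big") % n
    return pow(tag, e, n) == h

text, tag = embed_watermark("model output")
assert detect_watermark(text, tag)
# A modified text would (with overwhelming probability) fail detection.
```

In the actual language-model setting the watermark must be embedded in the sampling process rather than appended as a tag, but the private-embed/public-detect division of keys is the same.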
no code implementations • 27 Aug 2022 • Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Mingyuan Wang
In particular, for computationally bounded learners, we extend the recent result of Bubeck and Sellke [NeurIPS 2021], which shows that robust models might need more parameters, to the computational regime, and show that bounded learners could provably need an even larger number of parameters.
no code implementations • 7 Feb 2022 • Ji Gao, Sanjam Garg, Mohammad Mahmoody, Prashant Nalini Vasudevan
Privacy attacks on machine learning models aim to identify the data used to train them.
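The canonical example is membership inference: an overfit model behaves differently on points it memorized than on fresh ones. A minimal loss-threshold sketch (hypothetical illustration, not a specific attack from the paper) makes the idea concrete:

```python
# Loss-threshold membership inference against a deliberately overfit
# "model" -- a lookup table that memorizes its training labels.
# Hypothetical illustration only.
train = {"alice": 1, "bob": 0, "carol": 1}

def model(x):
    return train.get(x, 0.5)  # memorized label, or an uncertain 0.5

def loss(x, y):
    return (model(x) - y) ** 2

def is_member(x, y, threshold=0.1):
    """Guess that (x, y) was a training point if its loss is near zero."""
    return loss(x, y) < threshold

assert is_member("alice", 1)      # training point: loss 0
assert not is_member("dave", 1)   # unseen point: loss 0.25
```

Real attacks apply the same principle to per-example loss (or confidence) of a trained neural network rather than a lookup table.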
2 code implementations • 10 Nov 2020 • Nicholas Carlini, Samuel Deng, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Shuang Song, Abhradeep Thakurta, Florian Tramer
A private machine learning algorithm hides as much as possible about its training data while still preserving accuracy.
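The tension between hiding the training data and preserving accuracy can be seen even in the simplest differentially private release. Below is a minimal sketch of the Laplace mechanism for a bounded mean (illustrative only; the function name and parameters are hypothetical, not an API from the paper):

```python
# Minimal sketch of the privacy/accuracy trade-off: releasing a mean
# with epsilon-differential privacy via the Laplace mechanism.
# Hypothetical illustration only.
import random

def private_mean(data, epsilon, lo=0.0, hi=1.0):
    """Release the mean of values clipped to [lo, hi] with eps-DP noise."""
    clipped = [min(max(x, lo), hi) for x in data]
    true_mean = sum(clipped) / len(clipped)
    # One record can change the clipped mean by at most this much.
    sensitivity = (hi - lo) / len(clipped)
    # A Laplace(1) sample is the difference of two Exp(1) samples.
    noise = random.expovariate(1) - random.expovariate(1)
    return true_mean + noise * sensitivity / epsilon

data = [0.2, 0.4, 0.6, 0.8]
# Smaller epsilon -> more noise -> more privacy but less accuracy.
```

The same noise-scaled-to-sensitivity principle underlies DP-SGD for neural networks, where per-example gradients are clipped and noised instead of a single statistic.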
no code implementations • NeurIPS 2021 • Samuel Deng, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Abhradeep Thakurta
Some of the stronger poisoning attacks require the full knowledge of the training data.
no code implementations • 25 Feb 2020 • Sanjam Garg, Shafi Goldwasser, Prashant Nalini Vasudevan
In particular, we provide a precise definition of what could (or should) be expected from an entity that collects individuals' data when it is asked to delete some of that data.
no code implementations • 28 May 2019 • Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody
In the reverse direction, we also show that the existence of such a learning task, in which computational robustness beats information-theoretic robustness, requires computational hardness, as it implies (average-case) hardness of NP.