no code implementations • 14 Feb 2024 • Matthieu Meeus, Igor Shilov, Manuel Faysse, Yves-Alexandre de Montjoye
Here we propose using copyright traps, the inclusion of fictitious entries in original content, to detect the use of copyrighted materials in LLMs, with a focus on models where memorization does not naturally occur.
no code implementations • 15 Feb 2022 • Pierre Stock, Igor Shilov, Ilya Mironov, Alexandre Sablayrolles
Reconstruction attacks allow an adversary to regenerate data samples from the training set using only access to a trained model.
3 code implementations • 25 Sep 2021 • Ashkan Yousefpour, Igor Shilov, Alexandre Sablayrolles, Davide Testuggine, Karthik Prasad, Mani Malek, John Nguyen, Sayan Ghosh, Akash Bharadwaj, Jessica Zhao, Graham Cormode, Ilya Mironov
We introduce Opacus, a free, open-source PyTorch library for training deep learning models with differential privacy (hosted at opacus.ai).
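As a loose illustration, the core of differentially private training that a library like Opacus automates is the DP-SGD update: clip each per-example gradient, sum, and add Gaussian noise calibrated to the clipping norm. The sketch below is plain Python under assumed toy inputs, not the library's actual API:

```python
import math
import random

def clip(grad, max_norm):
    """Rescale a per-example gradient so its L2 norm is at most max_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, max_norm / (norm + 1e-12))
    return [g * scale for g in grad]

def dp_sgd_step(per_example_grads, max_norm=1.0, noise_multiplier=1.1, seed=0):
    """One DP-SGD update direction (illustrative, not Opacus's API):
    clip each per-example gradient, sum them, add Gaussian noise with
    std noise_multiplier * max_norm per coordinate, then average."""
    rng = random.Random(seed)
    clipped = [clip(g, max_norm) for g in per_example_grads]
    dim = len(per_example_grads[0])
    summed = [sum(g[i] for g in clipped) for i in range(dim)]
    noisy = [s + rng.gauss(0.0, noise_multiplier * max_norm) for s in summed]
    return [v / len(per_example_grads) for v in noisy]
```

In practice Opacus hooks into PyTorch to compute per-sample gradients efficiently and to track the privacy budget; the sketch only shows the clipping-and-noise step.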
1 code implementation • NeurIPS 2021 • Mani Malek, Ilya Mironov, Karthik Prasad, Igor Shilov, Florian Tramèr
We propose two novel approaches based on, respectively, the Laplace mechanism and the PATE framework, and demonstrate their effectiveness on standard benchmarks.
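For reference, the classical Laplace mechanism underlying the first approach releases a value with epsilon-differential privacy by adding Laplace noise scaled to sensitivity / epsilon. A minimal stdlib-only sketch (function names are illustrative, not from the paper):

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via the inverse-CDF of a uniform draw."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(true_value, sensitivity, epsilon, seed=0):
    """Release true_value with epsilon-DP by adding Laplace noise
    of scale sensitivity / epsilon (standard Laplace mechanism)."""
    rng = random.Random(seed)
    return true_value + laplace_noise(sensitivity / epsilon, rng)
```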