no code implementations • 15 Nov 2023 • Thomas Cilloni, Charles Fleming, Charles Walter
Our methodology observes the output of a stable diffusion model at different generative epochs and trains a classification model to distinguish whether a series of intermediates originated from a training sample or not.
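The pipeline described above can be sketched as follows. This is an illustrative membership-inference setup, not the paper's exact design: each generation run is summarized by a feature vector of per-step statistics of its intermediates (here, synthetic per-step errors where "member" runs score lower), and a logistic-regression classifier is trained to separate member from non-member runs.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_steps = 200, 10

# Synthetic per-step statistics of the generative intermediates:
# runs seeded from training samples ("members") tend to score lower.
members = rng.normal(0.8, 0.3, size=(n_runs, n_steps))
non_members = rng.normal(1.2, 0.3, size=(n_runs, n_steps))
X = np.vstack([members, non_members])
y = np.array([1.0] * n_runs + [0.0] * n_runs)

# Train a logistic-regression membership classifier by gradient descent.
w, b = np.zeros(n_steps), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

preds = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(preds == y)
```

With well-separated synthetic features the classifier easily separates the two groups; in practice the gap between member and non-member intermediates is what the attack has to exploit.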
no code implementations • 19 May 2022 • Thomas Cilloni, Charles Walter, Charles Fleming
Adversarial algorithms solve optimization problems that minimize the accuracy of ML models by perturbing their inputs, often using the model's loss function to craft the perturbations.
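A minimal instance of such a loss-based perturbation is the fast gradient sign method, shown below as a hedged sketch (a stand-in, not this paper's method): for a logistic model with loss L(x) = -log σ(y·w·x), the gradient of the loss with respect to the input is computed analytically and the input is nudged by ε in the sign of that gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # binary cross-entropy for a label y in {-1, +1}
    return -np.log(sigmoid(y * w.dot(x)))

def fgsm_perturb(w, x, y, eps):
    # dL/dx = -y * sigmoid(-y * w.x) * w, derived analytically
    grad_x = -y * sigmoid(-y * w.dot(x)) * w
    # step in the direction that increases the loss
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, -0.4])
y = 1.0
x_adv = fgsm_perturb(w, x, y, eps=0.1)
```

The perturbed input `x_adv` incurs a strictly higher loss than `x`, which is exactly the degradation in model accuracy that the optimization targets.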
no code implementations • 20 Oct 2020 • Thomas Cilloni, Wei Wang, Charles Walter, Charles Fleming
In this paper we propose Ulixes, a strategy to generate visually non-invasive facial noise masks that yield adversarial examples, preventing the formation of identifiable user clusters in the embedding space of facial encoders.
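The idea of a small noise mask that disrupts clustering in embedding space can be illustrated with a toy sketch (not the paper's Ulixes implementation): given a linear facial encoder E(x) = W @ x and the user's embedding-space centroid, a bounded additive mask is crafted by gradient ascent on the distance between E(x + mask) and the centroid, so the cloaked image no longer clusters with the user's identity.

```python
import numpy as np

rng = np.random.default_rng(1)
d_img, d_emb = 64, 8

# Toy linear "facial encoder": E(x) = W @ x.
W = rng.normal(size=(d_emb, d_img)) / np.sqrt(d_img)

x = rng.normal(size=d_img)   # a flattened face image (toy scale)
centroid = W @ x             # the user's identity cluster center

def distance(mask):
    return np.linalg.norm(W @ (x + mask) - centroid)

mask = np.zeros(d_img)
for _ in range(50):
    diff = W @ (x + mask) - centroid
    norm = np.linalg.norm(diff)
    # Gradient of the embedding distance w.r.t. the mask (random kick at 0).
    grad = W.T @ diff / norm if norm > 1e-9 else rng.normal(size=d_img)
    mask += 0.05 * grad / np.linalg.norm(grad)  # small normalized ascent step
    mask = np.clip(mask, -0.5, 0.5)             # keep the mask visually subtle
```

The per-pixel clip stands in for the "visually non-invasive" constraint: the mask stays bounded while the embedding is pushed away from the identity cluster.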