The videos were recorded in multiple U.S. states with a diverse set of adults spanning a range of ages, genders, and apparent skin tones.
This work examines the vulnerability of multimodal (image + text) models to adversarial threats similar to those discussed in previous literature on unimodal (image- or text-only) models.
We perform our evaluations on the winning entries of the DeepFake Detection Challenge (DFDC) and demonstrate that they can easily be bypassed in a practical attack scenario using transferable and accessible adversarial attacks.
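As a concrete illustration of a transferable attack of this kind, the sketch below crafts an FGSM perturbation on a white-box surrogate classifier and checks whether it transfers to a black-box detector. This is a minimal sketch under stated assumptions: the `surrogate` and `target_detector` models and the `eps` budget are hypothetical placeholders, not the DFDC winners' actual architectures or the paper's exact attack.

```python
import torch
import torch.nn.functional as F

def fgsm_transfer_attack(surrogate, target_detector, frames, eps=8 / 255):
    """Craft an FGSM perturbation on a white-box surrogate and test
    whether it transfers to a black-box target detector.

    Assumes `surrogate` and `target_detector` are binary real/fake
    classifiers that return one logit per frame (shape [N, 1]),
    with label 0 = real and label 1 = fake.
    """
    frames = frames.clone().detach().requires_grad_(True)
    # Loss toward the "real" (0) label for each frame.
    loss = F.binary_cross_entropy_with_logits(
        surrogate(frames), torch.zeros(frames.size(0), 1)
    )
    loss.backward()
    # Targeted FGSM step: descend the loss toward "real",
    # then clip back to a valid image range.
    adv = (frames - eps * frames.grad.sign()).clamp(0, 1).detach()
    with torch.no_grad():
        # Did the unseen target detector also flip to "real"?
        transferred = torch.sigmoid(target_detector(adv)) < 0.5
    return adv, transferred
```

In this transfer setting, the attacker never queries the target model's gradients; the perturbation is computed entirely on the surrogate, which is what makes such attacks accessible in practice.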
In addition to Deepfakes, a variety of GAN-based face-swapping methods have been published with accompanying code.
Because each entity holds only limited training data, different entities addressing the same vision task on sensitive images may be unable to train a robust deep network on their own.
In this paper, we introduce a preview of the Deepfake Detection Challenge (DFDC) dataset, consisting of 5K videos featuring two facial modification algorithms.
This paper introduces Exemplar GANs (ExGANs), a novel approach to in-painting in which the identity of the object to be removed or changed is preserved and accounted for at inference time.
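To make the exemplar-conditioning idea concrete, the following is a minimal sketch of a generator forward pass that in-paints a masked region while conditioning on a reference image of the same identity. The module layout and the simple channel-concatenation scheme are illustrative assumptions, not the ExGAN paper's exact architecture.

```python
import torch
import torch.nn as nn

class ExemplarInpainter(nn.Module):
    """Illustrative exemplar-conditioned in-painting generator.

    The masked input and an exemplar image of the same identity are
    concatenated channel-wise, so the decoder can copy identity cues
    (e.g., eye color) from the exemplar. Layer sizes are placeholders.
    """

    def __init__(self):
        super().__init__()
        # Input: 3 masked-image channels + 3 exemplar channels + 1 mask channel.
        self.net = nn.Sequential(
            nn.Conv2d(3 + 3 + 1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, mask, exemplar):
        # Zero out the region to in-paint, then condition on the exemplar.
        masked = image * (1 - mask)
        out = self.net(torch.cat([masked, exemplar, mask], dim=1))
        # Only replace pixels inside the mask; keep the rest untouched.
        return image * (1 - mask) + out * mask
```

Compositing the generator output only inside the mask keeps the untouched pixels identical to the source image, so the exemplar influences solely the region being replaced.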